00:00:00.001 Started by upstream project "autotest-per-patch" build number 127155 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.031 Fetching changes from the remote Git repository 00:00:00.032 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.077 > git --version # 'git version 2.39.2' 00:00:00.077 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.119 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.119 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.561 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.574 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.587 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:02.587 > git config core.sparsecheckout # timeout=10 00:00:02.599 > git read-tree -mu HEAD # timeout=10 00:00:02.617 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:02.641 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:02.642 > git rev-list --no-walk 178f233a2a13202f6c9967830fd93e30560100d5 # timeout=10 00:00:02.817 [Pipeline] Start of Pipeline 00:00:02.832 [Pipeline] library 00:00:02.834 Loading library shm_lib@master 00:00:02.834 Library shm_lib@master is cached. Copying from home. 00:00:02.863 [Pipeline] node 00:00:02.880 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:02.882 [Pipeline] { 00:00:02.892 [Pipeline] catchError 00:00:02.893 [Pipeline] { 00:00:02.906 [Pipeline] wrap 00:00:02.916 [Pipeline] { 00:00:02.925 [Pipeline] stage 00:00:02.927 [Pipeline] { (Prologue) 00:00:03.118 [Pipeline] sh 00:00:03.398 + logger -p user.info -t JENKINS-CI 00:00:03.416 [Pipeline] echo 00:00:03.417 Node: WFP19 00:00:03.423 [Pipeline] sh 00:00:03.718 [Pipeline] setCustomBuildProperty 00:00:03.731 [Pipeline] echo 00:00:03.733 Cleanup processes 00:00:03.738 [Pipeline] sh 00:00:04.023 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.023 3906653 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.036 [Pipeline] sh 00:00:04.316 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.316 ++ grep -v 'sudo pgrep' 00:00:04.316 ++ awk '{print $1}' 00:00:04.316 + sudo kill -9 00:00:04.316 + true 00:00:04.327 [Pipeline] cleanWs 00:00:04.334 [WS-CLEANUP] Deleting project workspace... 00:00:04.334 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.339 [WS-CLEANUP] done 00:00:04.341 [Pipeline] setCustomBuildProperty 00:00:04.351 [Pipeline] sh 00:00:04.625 + sudo git config --global --replace-all safe.directory '*' 00:00:04.712 [Pipeline] httpRequest 00:00:04.731 [Pipeline] echo 00:00:04.732 Sorcerer 10.211.164.101 is alive 00:00:04.738 [Pipeline] httpRequest 00:00:04.742 HttpMethod: GET 00:00:04.742 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:04.743 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:04.744 Response Code: HTTP/1.1 200 OK 00:00:04.744 Success: Status code 200 is in the accepted range: 200,404 00:00:04.745 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:05.502 [Pipeline] sh 00:00:05.786 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:05.802 [Pipeline] httpRequest 00:00:05.815 [Pipeline] echo 00:00:05.816 Sorcerer 10.211.164.101 is alive 00:00:05.826 [Pipeline] httpRequest 00:00:05.840 HttpMethod: GET 00:00:05.842 URL: http://10.211.164.101/packages/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz 00:00:05.844 Sending request to url: http://10.211.164.101/packages/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz 00:00:05.845 Response Code: HTTP/1.1 200 OK 00:00:05.845 Success: Status code 200 is in the accepted range: 200,404 00:00:05.845 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz 00:00:27.116 [Pipeline] sh 00:00:27.402 + tar --no-same-owner -xf spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz 00:00:30.699 [Pipeline] sh 00:00:30.981 + git -C spdk log --oneline -n5 00:00:30.981 415e0bb41 pkgdep/git: Add extra libnl-genl dev package to QAT's dependencies 00:00:30.981 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:00:30.981 50222f810 configure: don't exit on non Intel platforms 00:00:30.981 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:30.981 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:00:30.994 [Pipeline] } 00:00:31.011 [Pipeline] // stage 00:00:31.020 [Pipeline] stage 00:00:31.023 [Pipeline] { (Prepare) 00:00:31.039 [Pipeline] writeFile 00:00:31.055 [Pipeline] sh 00:00:31.337 + logger -p user.info -t JENKINS-CI 00:00:31.371 [Pipeline] sh 00:00:31.656 + logger -p user.info -t JENKINS-CI 00:00:31.667 [Pipeline] sh 00:00:31.948 + cat autorun-spdk.conf 00:00:31.948 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.948 SPDK_TEST_BLOCKDEV=1 00:00:31.948 SPDK_TEST_ISAL=1 00:00:31.948 SPDK_TEST_CRYPTO=1 00:00:31.948 SPDK_TEST_REDUCE=1 00:00:31.948 SPDK_TEST_VBDEV_COMPRESS=1 00:00:31.948 SPDK_RUN_UBSAN=1 00:00:31.948 SPDK_TEST_ACCEL=1 00:00:31.955 RUN_NIGHTLY=0 00:00:31.960 [Pipeline] readFile 00:00:31.986 [Pipeline] withEnv 00:00:31.988 [Pipeline] { 00:00:32.002 [Pipeline] sh 00:00:32.283 + set -ex 00:00:32.283 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:32.283 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:32.283 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.283 ++ SPDK_TEST_BLOCKDEV=1 00:00:32.283 ++ SPDK_TEST_ISAL=1 00:00:32.283 ++ SPDK_TEST_CRYPTO=1 00:00:32.283 ++ SPDK_TEST_REDUCE=1 00:00:32.283 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:32.283 ++ SPDK_RUN_UBSAN=1 00:00:32.283 ++ SPDK_TEST_ACCEL=1 00:00:32.283 ++ RUN_NIGHTLY=0 00:00:32.283 + case $SPDK_TEST_NVMF_NICS in 00:00:32.283 + DRIVERS= 
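For readers following this run outside the CI node: the autorun-spdk.conf that the trace above cats and then sources is a plain shell-style key=value file. A minimal sketch of recreating it by hand, assuming the same workspace path as in the log; the heredoc wrapper is illustrative and not part of the pipeline itself.

# Illustrative sketch only; variable names and values copied verbatim from the log above.
cat > /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_BLOCKDEV=1
SPDK_TEST_ISAL=1
SPDK_TEST_CRYPTO=1
SPDK_TEST_REDUCE=1
SPDK_TEST_VBDEV_COMPRESS=1
SPDK_RUN_UBSAN=1
SPDK_TEST_ACCEL=1
RUN_NIGHTLY=0
EOF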
00:00:32.283 + [[ -n '' ]] 00:00:32.283 + exit 0 00:00:32.291 [Pipeline] } 00:00:32.306 [Pipeline] // withEnv 00:00:32.311 [Pipeline] } 00:00:32.322 [Pipeline] // stage 00:00:32.330 [Pipeline] catchError 00:00:32.332 [Pipeline] { 00:00:32.346 [Pipeline] timeout 00:00:32.347 Timeout set to expire in 1 hr 0 min 00:00:32.348 [Pipeline] { 00:00:32.361 [Pipeline] stage 00:00:32.363 [Pipeline] { (Tests) 00:00:32.376 [Pipeline] sh 00:00:32.657 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest 00:00:32.657 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:00:32.657 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:00:32.657 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:00:32.657 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:32.657 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:00:32.657 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:00:32.657 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:32.657 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:00:32.657 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:32.657 + [[ crypto-phy-autotest == pkgdep-* ]] 00:00:32.657 + cd /var/jenkins/workspace/crypto-phy-autotest 00:00:32.657 + source /etc/os-release 00:00:32.657 ++ NAME='Fedora Linux' 00:00:32.657 ++ VERSION='38 (Cloud Edition)' 00:00:32.657 ++ ID=fedora 00:00:32.657 ++ VERSION_ID=38 00:00:32.657 ++ VERSION_CODENAME= 00:00:32.657 ++ PLATFORM_ID=platform:f38 00:00:32.657 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:32.657 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:32.657 ++ LOGO=fedora-logo-icon 00:00:32.657 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:32.657 ++ HOME_URL=https://fedoraproject.org/ 00:00:32.657 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:32.657 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:32.657 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:32.657 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:32.657 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:32.657 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:32.657 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:32.657 ++ SUPPORT_END=2024-05-14 00:00:32.657 ++ VARIANT='Cloud Edition' 00:00:32.657 ++ VARIANT_ID=cloud 00:00:32.657 + uname -a 00:00:32.657 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:32.657 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:00:36.846 Hugepages 00:00:36.846 node hugesize free / total 00:00:36.846 node0 1048576kB 0 / 0 00:00:36.846 node0 2048kB 0 / 0 00:00:36.846 node1 1048576kB 0 / 0 00:00:36.846 node1 2048kB 0 / 0 00:00:36.846 00:00:36.846 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:36.846 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 
0000:80:04.4 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:36.846 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:36.846 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:36.846 + rm -f /tmp/spdk-ld-path 00:00:36.847 + source autorun-spdk.conf 00:00:36.847 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.847 ++ SPDK_TEST_BLOCKDEV=1 00:00:36.847 ++ SPDK_TEST_ISAL=1 00:00:36.847 ++ SPDK_TEST_CRYPTO=1 00:00:36.847 ++ SPDK_TEST_REDUCE=1 00:00:36.847 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:36.847 ++ SPDK_RUN_UBSAN=1 00:00:36.847 ++ SPDK_TEST_ACCEL=1 00:00:36.847 ++ RUN_NIGHTLY=0 00:00:36.847 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:36.847 + [[ -n '' ]] 00:00:36.847 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:36.847 + for M in /var/spdk/build-*-manifest.txt 00:00:36.847 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:36.847 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:36.847 + for M in /var/spdk/build-*-manifest.txt 00:00:36.847 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:36.847 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:36.847 ++ uname 00:00:36.847 + [[ Linux == \L\i\n\u\x ]] 00:00:36.847 + sudo dmesg -T 00:00:36.847 + sudo dmesg --clear 00:00:36.847 + dmesg_pid=3908259 00:00:36.847 + [[ Fedora Linux == FreeBSD ]] 00:00:36.847 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.847 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.847 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:36.847 + [[ -x /usr/src/fio-static/fio ]] 00:00:36.847 + export FIO_BIN=/usr/src/fio-static/fio 00:00:36.847 + FIO_BIN=/usr/src/fio-static/fio 00:00:36.847 + sudo dmesg -Tw 00:00:36.847 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:36.847 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:36.847 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:36.847 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.847 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.847 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:36.847 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.847 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.847 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:36.847 Test configuration: 00:00:36.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.847 SPDK_TEST_BLOCKDEV=1 00:00:36.847 SPDK_TEST_ISAL=1 00:00:36.847 SPDK_TEST_CRYPTO=1 00:00:36.847 SPDK_TEST_REDUCE=1 00:00:36.847 SPDK_TEST_VBDEV_COMPRESS=1 00:00:36.847 SPDK_RUN_UBSAN=1 00:00:36.847 SPDK_TEST_ACCEL=1 00:00:36.847 RUN_NIGHTLY=0 11:42:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:36.847 11:42:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:36.847 11:42:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:36.847 11:42:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:36.847 11:42:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.847 11:42:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.847 11:42:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.847 11:42:22 -- paths/export.sh@5 -- $ export PATH 00:00:36.847 11:42:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.847 11:42:22 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:36.847 11:42:22 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:36.847 11:42:22 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721900542.XXXXXX 00:00:36.847 11:42:22 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721900542.978JOu 00:00:36.847 11:42:22 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:36.847 11:42:22 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:36.847 11:42:22 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:36.847 11:42:22 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:36.847 11:42:22 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:36.847 11:42:22 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:36.847 11:42:22 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:00:36.847 11:42:22 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.847 11:42:22 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:00:36.847 11:42:22 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:36.847 11:42:22 -- pm/common@17 -- $ local monitor 00:00:36.847 11:42:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.847 11:42:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.847 11:42:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.847 11:42:22 -- pm/common@21 -- $ date +%s 00:00:36.847 11:42:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.847 11:42:22 -- pm/common@21 -- $ date +%s 00:00:36.847 11:42:22 -- pm/common@25 -- $ sleep 1 00:00:36.847 11:42:22 -- pm/common@21 -- $ date +%s 00:00:36.847 11:42:22 -- pm/common@21 -- $ date +%s 00:00:36.847 11:42:22 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900542 00:00:36.847 11:42:22 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900542 00:00:36.847 11:42:22 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900542 00:00:36.847 11:42:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900542 00:00:36.847 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900542_collect-vmstat.pm.log 00:00:36.847 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900542_collect-cpu-load.pm.log 00:00:36.847 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900542_collect-cpu-temp.pm.log 00:00:36.847 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900542_collect-bmc-pm.bmc.pm.log 00:00:37.787 11:42:23 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:37.787 
11:42:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:37.787 11:42:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:37.787 11:42:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:37.787 11:42:23 -- spdk/autobuild.sh@16 -- $ date -u 00:00:37.787 Thu Jul 25 09:42:23 AM UTC 2024 00:00:37.787 11:42:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:37.787 v24.09-pre-312-g415e0bb41 00:00:37.787 11:42:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:37.787 11:42:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:37.787 11:42:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:37.787 11:42:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:37.787 11:42:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:37.787 11:42:23 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.787 ************************************ 00:00:37.787 START TEST ubsan 00:00:37.787 ************************************ 00:00:37.787 11:42:23 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:37.787 using ubsan 00:00:37.787 00:00:37.787 real 0m0.001s 00:00:37.787 user 0m0.000s 00:00:37.787 sys 0m0.000s 00:00:37.787 11:42:23 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:37.787 11:42:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:37.787 ************************************ 00:00:37.787 END TEST ubsan 00:00:37.787 ************************************ 00:00:38.045 11:42:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:38.045 11:42:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:38.045 11:42:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:38.045 11:42:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:38.045 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:00:38.045 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:00:38.613 Using 'verbs' RDMA provider 00:00:54.889 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:09.761 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:09.761 Creating mk/config.mk...done. 00:01:09.761 Creating mk/cc.flags.mk...done. 00:01:09.761 Type 'make' to build. 00:01:09.761 11:42:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:09.761 11:42:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:09.761 11:42:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:09.761 11:42:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.761 ************************************ 00:01:09.761 START TEST make 00:01:09.761 ************************************ 00:01:09.761 11:42:55 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:10.020 make[1]: Nothing to be done for 'all'. 
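Before the DPDK submodule output below, the run configured and built SPDK with the flags logged above. A minimal sketch of reproducing that step locally, assuming a full SPDK checkout with submodules and fio sources at /usr/src/fio as on this CI node; the flag list is copied verbatim from the log, and only the -j value differs from the logged make command.

# Sketch of the configure/build step captured in this log.
cd /var/jenkins/workspace/crypto-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --with-vbdev-compress --with-dpdk-compressdev --with-crypto \
    --enable-ubsan --enable-coverage --with-ublk --with-shared
make -j"$(nproc)"   # the CI run above used make -j112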
00:01:48.744 The Meson build system 00:01:48.744 Version: 1.3.1 00:01:48.744 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:48.744 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:48.744 Build type: native build 00:01:48.744 Program cat found: YES (/usr/bin/cat) 00:01:48.744 Project name: DPDK 00:01:48.744 Project version: 24.03.0 00:01:48.744 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.744 C linker for the host machine: cc ld.bfd 2.39-16 00:01:48.744 Host machine cpu family: x86_64 00:01:48.744 Host machine cpu: x86_64 00:01:48.744 Message: ## Building in Developer Mode ## 00:01:48.744 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.744 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.744 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.744 Program python3 found: YES (/usr/bin/python3) 00:01:48.744 Program cat found: YES (/usr/bin/cat) 00:01:48.744 Compiler for C supports arguments -march=native: YES 00:01:48.744 Checking for size of "void *" : 8 00:01:48.744 Checking for size of "void *" : 8 (cached) 00:01:48.744 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:48.744 Library m found: YES 00:01:48.744 Library numa found: YES 00:01:48.744 Has header "numaif.h" : YES 00:01:48.744 Library fdt found: NO 00:01:48.744 Library execinfo found: NO 00:01:48.744 Has header "execinfo.h" : YES 00:01:48.744 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.744 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.744 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.744 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.744 Run-time dependency openssl found: YES 3.0.9 00:01:48.744 Run-time dependency libpcap found: YES 1.10.4 00:01:48.744 Has header "pcap.h" with dependency libpcap: YES 00:01:48.744 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.744 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.744 Compiler for C supports arguments -Wformat: YES 00:01:48.744 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.744 Compiler for C supports arguments -Wformat-security: NO 00:01:48.744 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.744 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.744 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.744 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.744 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.744 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.744 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.744 Compiler for C supports arguments -Wundef: YES 00:01:48.744 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.744 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.744 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.744 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.744 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.744 Program objdump found: YES (/usr/bin/objdump) 00:01:48.744 Compiler for C supports arguments -mavx512f: YES 00:01:48.744 Checking if "AVX512 checking" compiles: YES 00:01:48.744 
Fetching value of define "__SSE4_2__" : 1 00:01:48.744 Fetching value of define "__AES__" : 1 00:01:48.744 Fetching value of define "__AVX__" : 1 00:01:48.744 Fetching value of define "__AVX2__" : 1 00:01:48.744 Fetching value of define "__AVX512BW__" : 1 00:01:48.744 Fetching value of define "__AVX512CD__" : 1 00:01:48.744 Fetching value of define "__AVX512DQ__" : 1 00:01:48.744 Fetching value of define "__AVX512F__" : 1 00:01:48.744 Fetching value of define "__AVX512VL__" : 1 00:01:48.744 Fetching value of define "__PCLMUL__" : 1 00:01:48.744 Fetching value of define "__RDRND__" : 1 00:01:48.744 Fetching value of define "__RDSEED__" : 1 00:01:48.744 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.744 Fetching value of define "__znver1__" : (undefined) 00:01:48.744 Fetching value of define "__znver2__" : (undefined) 00:01:48.744 Fetching value of define "__znver3__" : (undefined) 00:01:48.744 Fetching value of define "__znver4__" : (undefined) 00:01:48.744 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.744 Message: lib/log: Defining dependency "log" 00:01:48.744 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.744 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.744 Checking for function "getentropy" : NO 00:01:48.744 Message: lib/eal: Defining dependency "eal" 00:01:48.744 Message: lib/ring: Defining dependency "ring" 00:01:48.744 Message: lib/rcu: Defining dependency "rcu" 00:01:48.744 Message: lib/mempool: Defining dependency "mempool" 00:01:48.744 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.744 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.744 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.744 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.744 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.744 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.744 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:48.744 Compiler for C supports arguments -mpclmul: YES 00:01:48.744 Compiler for C supports arguments -maes: YES 00:01:48.744 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.744 Compiler for C supports arguments -mavx512bw: YES 00:01:48.744 Compiler for C supports arguments -mavx512dq: YES 00:01:48.744 Compiler for C supports arguments -mavx512vl: YES 00:01:48.744 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.744 Compiler for C supports arguments -mavx2: YES 00:01:48.744 Compiler for C supports arguments -mavx: YES 00:01:48.744 Message: lib/net: Defining dependency "net" 00:01:48.744 Message: lib/meter: Defining dependency "meter" 00:01:48.744 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.744 Message: lib/pci: Defining dependency "pci" 00:01:48.744 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.744 Message: lib/hash: Defining dependency "hash" 00:01:48.744 Message: lib/timer: Defining dependency "timer" 00:01:48.744 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.744 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.744 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.744 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.744 Message: lib/power: Defining dependency "power" 00:01:48.744 Message: lib/reorder: Defining dependency "reorder" 00:01:48.744 Message: lib/security: Defining dependency "security" 00:01:48.744 Has header "linux/userfaultfd.h" : YES 00:01:48.744 Has header "linux/vduse.h" : YES 00:01:48.744 
Message: lib/vhost: Defining dependency "vhost" 00:01:48.744 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.744 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:01:48.744 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.744 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.744 Compiler for C supports arguments -std=c11: YES 00:01:48.744 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:01:48.744 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:01:48.744 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:01:48.744 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:01:48.744 Run-time dependency libmlx5 found: YES 1.24.44.0 00:01:48.744 Run-time dependency libibverbs found: YES 1.14.44.0 00:01:48.744 Library mtcr_ul found: NO 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:01:48.744 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:01:48.744 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has 
symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol 
"mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:48.745 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:48.745 Configuring mlx5_autoconf.h using configuration 00:01:48.745 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:48.745 Run-time dependency libcrypto found: YES 3.0.9 00:01:48.745 Library IPSec_MB found: YES 00:01:48.745 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:48.745 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:48.745 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.745 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.745 Library IPSec_MB found: YES 00:01:48.745 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:48.745 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:48.745 Compiler for C supports arguments -std=c11: YES (cached) 00:01:48.745 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:48.745 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:48.745 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:48.745 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:48.745 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:48.745 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:48.745 Library libisal found: NO 00:01:48.745 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:48.745 Compiler for C supports arguments -std=c11: YES (cached) 00:01:48.745 Compiler for C supports arguments -Wno-strict-prototypes: YES 
(cached) 00:01:48.745 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:48.745 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:48.745 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:48.745 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:48.745 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.745 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.745 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.745 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.745 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.745 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.745 Configuring doxy-api-html.conf using configuration 00:01:48.745 Configuring doxy-api-man.conf using configuration 00:01:48.745 Program mandb found: YES (/usr/bin/mandb) 00:01:48.745 Program sphinx-build found: NO 00:01:48.745 Configuring rte_build_config.h using configuration 00:01:48.745 Message: 00:01:48.745 ================= 00:01:48.745 Applications Enabled 00:01:48.745 ================= 00:01:48.745 00:01:48.745 apps: 00:01:48.745 00:01:48.745 00:01:48.745 Message: 00:01:48.745 ================= 00:01:48.745 Libraries Enabled 00:01:48.745 ================= 00:01:48.745 00:01:48.745 libs: 00:01:48.745 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.745 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.745 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.745 00:01:48.745 Message: 00:01:48.745 =============== 00:01:48.745 Drivers Enabled 00:01:48.745 =============== 00:01:48.745 00:01:48.745 common: 00:01:48.745 mlx5, qat, 00:01:48.745 bus: 00:01:48.745 auxiliary, pci, vdev, 00:01:48.745 mempool: 00:01:48.745 ring, 00:01:48.745 dma: 00:01:48.745 00:01:48.745 net: 00:01:48.745 00:01:48.745 crypto: 00:01:48.745 ipsec_mb, mlx5, 00:01:48.745 compress: 00:01:48.745 isal, mlx5, 00:01:48.745 vdpa: 00:01:48.745 00:01:48.745 00:01:48.745 Message: 00:01:48.745 ================= 00:01:48.745 Content Skipped 00:01:48.745 ================= 00:01:48.745 00:01:48.745 apps: 00:01:48.745 dumpcap: explicitly disabled via build config 00:01:48.745 graph: explicitly disabled via build config 00:01:48.745 pdump: explicitly disabled via build config 00:01:48.745 proc-info: explicitly disabled via build config 00:01:48.745 test-acl: explicitly disabled via build config 00:01:48.745 test-bbdev: explicitly disabled via build config 00:01:48.745 test-cmdline: explicitly disabled via build config 00:01:48.745 test-compress-perf: explicitly disabled via build config 00:01:48.745 test-crypto-perf: explicitly disabled via build config 00:01:48.745 test-dma-perf: explicitly disabled via build config 00:01:48.745 test-eventdev: explicitly disabled via build config 00:01:48.745 test-fib: explicitly disabled via build config 00:01:48.745 test-flow-perf: explicitly disabled via build config 00:01:48.745 test-gpudev: explicitly disabled via build config 00:01:48.745 test-mldev: explicitly disabled via build config 00:01:48.746 test-pipeline: explicitly disabled via build config 00:01:48.746 test-pmd: explicitly disabled via build config 00:01:48.746 test-regex: explicitly disabled via build config 00:01:48.746 test-sad: explicitly disabled via build config 00:01:48.746 test-security-perf: explicitly disabled via build config 00:01:48.746 00:01:48.746 libs: 
00:01:48.746 argparse: explicitly disabled via build config 00:01:48.746 metrics: explicitly disabled via build config 00:01:48.746 acl: explicitly disabled via build config 00:01:48.746 bbdev: explicitly disabled via build config 00:01:48.746 bitratestats: explicitly disabled via build config 00:01:48.746 bpf: explicitly disabled via build config 00:01:48.746 cfgfile: explicitly disabled via build config 00:01:48.746 distributor: explicitly disabled via build config 00:01:48.746 efd: explicitly disabled via build config 00:01:48.746 eventdev: explicitly disabled via build config 00:01:48.746 dispatcher: explicitly disabled via build config 00:01:48.746 gpudev: explicitly disabled via build config 00:01:48.746 gro: explicitly disabled via build config 00:01:48.746 gso: explicitly disabled via build config 00:01:48.746 ip_frag: explicitly disabled via build config 00:01:48.746 jobstats: explicitly disabled via build config 00:01:48.746 latencystats: explicitly disabled via build config 00:01:48.746 lpm: explicitly disabled via build config 00:01:48.746 member: explicitly disabled via build config 00:01:48.746 pcapng: explicitly disabled via build config 00:01:48.746 rawdev: explicitly disabled via build config 00:01:48.746 regexdev: explicitly disabled via build config 00:01:48.746 mldev: explicitly disabled via build config 00:01:48.746 rib: explicitly disabled via build config 00:01:48.746 sched: explicitly disabled via build config 00:01:48.746 stack: explicitly disabled via build config 00:01:48.746 ipsec: explicitly disabled via build config 00:01:48.746 pdcp: explicitly disabled via build config 00:01:48.746 fib: explicitly disabled via build config 00:01:48.746 port: explicitly disabled via build config 00:01:48.746 pdump: explicitly disabled via build config 00:01:48.746 table: explicitly disabled via build config 00:01:48.746 pipeline: explicitly disabled via build config 00:01:48.746 graph: explicitly disabled via build config 00:01:48.746 node: explicitly disabled via build config 00:01:48.746 00:01:48.746 drivers: 00:01:48.746 common/cpt: not in enabled drivers build config 00:01:48.746 common/dpaax: not in enabled drivers build config 00:01:48.746 common/iavf: not in enabled drivers build config 00:01:48.746 common/idpf: not in enabled drivers build config 00:01:48.746 common/ionic: not in enabled drivers build config 00:01:48.746 common/mvep: not in enabled drivers build config 00:01:48.746 common/octeontx: not in enabled drivers build config 00:01:48.746 bus/cdx: not in enabled drivers build config 00:01:48.746 bus/dpaa: not in enabled drivers build config 00:01:48.746 bus/fslmc: not in enabled drivers build config 00:01:48.746 bus/ifpga: not in enabled drivers build config 00:01:48.746 bus/platform: not in enabled drivers build config 00:01:48.746 bus/uacce: not in enabled drivers build config 00:01:48.746 bus/vmbus: not in enabled drivers build config 00:01:48.746 common/cnxk: not in enabled drivers build config 00:01:48.746 common/nfp: not in enabled drivers build config 00:01:48.746 common/nitrox: not in enabled drivers build config 00:01:48.746 common/sfc_efx: not in enabled drivers build config 00:01:48.746 mempool/bucket: not in enabled drivers build config 00:01:48.746 mempool/cnxk: not in enabled drivers build config 00:01:48.746 mempool/dpaa: not in enabled drivers build config 00:01:48.746 mempool/dpaa2: not in enabled drivers build config 00:01:48.746 mempool/octeontx: not in enabled drivers build config 00:01:48.746 mempool/stack: not in enabled drivers build 
config 00:01:48.746 dma/cnxk: not in enabled drivers build config 00:01:48.746 dma/dpaa: not in enabled drivers build config 00:01:48.746 dma/dpaa2: not in enabled drivers build config 00:01:48.746 dma/hisilicon: not in enabled drivers build config 00:01:48.746 dma/idxd: not in enabled drivers build config 00:01:48.746 dma/ioat: not in enabled drivers build config 00:01:48.746 dma/skeleton: not in enabled drivers build config 00:01:48.746 net/af_packet: not in enabled drivers build config 00:01:48.746 net/af_xdp: not in enabled drivers build config 00:01:48.746 net/ark: not in enabled drivers build config 00:01:48.746 net/atlantic: not in enabled drivers build config 00:01:48.746 net/avp: not in enabled drivers build config 00:01:48.746 net/axgbe: not in enabled drivers build config 00:01:48.746 net/bnx2x: not in enabled drivers build config 00:01:48.746 net/bnxt: not in enabled drivers build config 00:01:48.746 net/bonding: not in enabled drivers build config 00:01:48.746 net/cnxk: not in enabled drivers build config 00:01:48.746 net/cpfl: not in enabled drivers build config 00:01:48.746 net/cxgbe: not in enabled drivers build config 00:01:48.746 net/dpaa: not in enabled drivers build config 00:01:48.746 net/dpaa2: not in enabled drivers build config 00:01:48.746 net/e1000: not in enabled drivers build config 00:01:48.746 net/ena: not in enabled drivers build config 00:01:48.746 net/enetc: not in enabled drivers build config 00:01:48.746 net/enetfec: not in enabled drivers build config 00:01:48.746 net/enic: not in enabled drivers build config 00:01:48.746 net/failsafe: not in enabled drivers build config 00:01:48.746 net/fm10k: not in enabled drivers build config 00:01:48.746 net/gve: not in enabled drivers build config 00:01:48.746 net/hinic: not in enabled drivers build config 00:01:48.746 net/hns3: not in enabled drivers build config 00:01:48.746 net/i40e: not in enabled drivers build config 00:01:48.746 net/iavf: not in enabled drivers build config 00:01:48.746 net/ice: not in enabled drivers build config 00:01:48.746 net/idpf: not in enabled drivers build config 00:01:48.746 net/igc: not in enabled drivers build config 00:01:48.746 net/ionic: not in enabled drivers build config 00:01:48.746 net/ipn3ke: not in enabled drivers build config 00:01:48.746 net/ixgbe: not in enabled drivers build config 00:01:48.746 net/mana: not in enabled drivers build config 00:01:48.746 net/memif: not in enabled drivers build config 00:01:48.746 net/mlx4: not in enabled drivers build config 00:01:48.746 net/mlx5: not in enabled drivers build config 00:01:48.746 net/mvneta: not in enabled drivers build config 00:01:48.746 net/mvpp2: not in enabled drivers build config 00:01:48.746 net/netvsc: not in enabled drivers build config 00:01:48.746 net/nfb: not in enabled drivers build config 00:01:48.746 net/nfp: not in enabled drivers build config 00:01:48.746 net/ngbe: not in enabled drivers build config 00:01:48.746 net/null: not in enabled drivers build config 00:01:48.746 net/octeontx: not in enabled drivers build config 00:01:48.746 net/octeon_ep: not in enabled drivers build config 00:01:48.746 net/pcap: not in enabled drivers build config 00:01:48.746 net/pfe: not in enabled drivers build config 00:01:48.746 net/qede: not in enabled drivers build config 00:01:48.746 net/ring: not in enabled drivers build config 00:01:48.746 net/sfc: not in enabled drivers build config 00:01:48.746 net/softnic: not in enabled drivers build config 00:01:48.746 net/tap: not in enabled drivers build config 00:01:48.746 
net/thunderx: not in enabled drivers build config 00:01:48.746 net/txgbe: not in enabled drivers build config 00:01:48.746 net/vdev_netvsc: not in enabled drivers build config 00:01:48.746 net/vhost: not in enabled drivers build config 00:01:48.746 net/virtio: not in enabled drivers build config 00:01:48.746 net/vmxnet3: not in enabled drivers build config 00:01:48.746 raw/*: missing internal dependency, "rawdev" 00:01:48.746 crypto/armv8: not in enabled drivers build config 00:01:48.746 crypto/bcmfs: not in enabled drivers build config 00:01:48.746 crypto/caam_jr: not in enabled drivers build config 00:01:48.746 crypto/ccp: not in enabled drivers build config 00:01:48.746 crypto/cnxk: not in enabled drivers build config 00:01:48.746 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.746 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.746 crypto/mvsam: not in enabled drivers build config 00:01:48.746 crypto/nitrox: not in enabled drivers build config 00:01:48.746 crypto/null: not in enabled drivers build config 00:01:48.746 crypto/octeontx: not in enabled drivers build config 00:01:48.746 crypto/openssl: not in enabled drivers build config 00:01:48.746 crypto/scheduler: not in enabled drivers build config 00:01:48.746 crypto/uadk: not in enabled drivers build config 00:01:48.746 crypto/virtio: not in enabled drivers build config 00:01:48.746 compress/nitrox: not in enabled drivers build config 00:01:48.746 compress/octeontx: not in enabled drivers build config 00:01:48.746 compress/zlib: not in enabled drivers build config 00:01:48.746 regex/*: missing internal dependency, "regexdev" 00:01:48.746 ml/*: missing internal dependency, "mldev" 00:01:48.746 vdpa/ifc: not in enabled drivers build config 00:01:48.746 vdpa/mlx5: not in enabled drivers build config 00:01:48.746 vdpa/nfp: not in enabled drivers build config 00:01:48.746 vdpa/sfc: not in enabled drivers build config 00:01:48.746 event/*: missing internal dependency, "eventdev" 00:01:48.746 baseband/*: missing internal dependency, "bbdev" 00:01:48.746 gpu/*: missing internal dependency, "gpudev" 00:01:48.746 00:01:48.746 00:01:49.682 Build targets in project: 115 00:01:49.682 00:01:49.682 DPDK 24.03.0 00:01:49.682 00:01:49.682 User defined options 00:01:49.682 buildtype : debug 00:01:49.682 default_library : shared 00:01:49.682 libdir : lib 00:01:49.682 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:49.682 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:49.682 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:49.682 cpu_instruction_set: native 00:01:49.682 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:49.682 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:49.682 enable_docs : false 00:01:49.682 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:49.682 enable_kmods : false 00:01:49.682 max_lcores : 128 00:01:49.682 tests : false 00:01:49.682 00:01:49.682 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.259 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.259 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.259 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.259 [3/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.523 [4/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.523 [5/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.523 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.523 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.523 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.523 [9/378] Linking static target lib/librte_kvargs.a 00:01:50.523 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.523 [11/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.523 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.523 [13/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.523 [14/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.523 [15/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.523 [16/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.523 [17/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:50.523 [18/378] Linking static target lib/librte_log.a 00:01:50.523 [19/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.523 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.523 [21/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.523 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.523 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.523 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.523 [25/378] Linking static target lib/librte_pci.a 00:01:50.786 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.786 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.786 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.786 [29/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:50.786 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.786 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.786 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.786 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:50.786 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.051 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.051 [36/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.051 [37/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.051 [38/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.051 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.051 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.051 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.051 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.051 [43/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.051 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.051 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.051 [46/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.051 [47/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.051 [48/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.051 [49/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.051 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.051 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.051 [52/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.051 [53/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.051 [54/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.051 [55/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.051 [56/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.051 [57/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.051 [58/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.051 [59/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.051 [60/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.051 [61/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.051 [62/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.051 [63/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.051 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.051 [65/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.051 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.051 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.051 [68/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.051 [69/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.051 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.051 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.051 [72/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.051 [73/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.051 [74/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.051 [75/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.051 
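The "User defined options" summary printed above can be reproduced, approximately, with a meson setup invocation of the following shape. This is only a sketch assembled from the printed values (the exact command line used by SPDK's dpdk build scripts is not part of this log), and the long disable_apps / disable_libs / c_args / c_link_args strings are replaced with placeholders rather than repeated here:

meson setup build-tmp \
  --prefix=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build \
  --libdir=lib \
  -Dbuildtype=debug \
  -Ddefault_library=shared \
  -Dcpu_instruction_set=native \
  -Dmax_lcores=128 \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 \
  -Ddisable_apps='<list as printed above>' \
  -Ddisable_libs='<list as printed above>' \
  -Dc_args='<flags as printed above>' \
  -Dc_link_args='<flags as printed above>'

The ninja backend (reported further below as /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112) then drives the 378 compile/link steps interleaved through the rest of this log.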
[76/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.051 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.051 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.051 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.051 [80/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.051 [81/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.051 [82/378] Linking static target lib/librte_meter.a 00:01:51.051 [83/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.051 [84/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.051 [85/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.317 [86/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.317 [87/378] Linking static target lib/librte_telemetry.a 00:01:51.317 [88/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.317 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.317 [90/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.317 [91/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.317 [92/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.317 [93/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.317 [94/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:51.317 [95/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.317 [96/378] Linking static target lib/librte_ring.a 00:01:51.317 [97/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.317 [98/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.317 [99/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.317 [100/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.317 [101/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.317 [102/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.317 [103/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.317 [104/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.317 [105/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.317 [106/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.317 [107/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:51.317 [108/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.317 [109/378] Linking static target lib/librte_cmdline.a 00:01:51.317 [110/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.317 [111/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.317 [112/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.317 [113/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.317 [114/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.317 [115/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.317 [116/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:51.317 [117/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.317 [118/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.317 [119/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.317 [120/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.317 [121/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.317 [122/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.317 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.317 [124/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.317 [125/378] Linking static target lib/librte_rcu.a 00:01:51.317 [126/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.317 [127/378] Linking static target lib/librte_net.a 00:01:51.578 [128/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.578 [129/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.578 [130/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.578 [131/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.578 [132/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.578 [133/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.578 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.578 [135/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.578 [136/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:51.578 [137/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.578 [138/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.578 [139/378] Linking static target lib/librte_timer.a 00:01:51.578 [140/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.578 [141/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.578 [142/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.578 [143/378] Linking static target lib/librte_mempool.a 00:01:51.578 [144/378] Linking static target lib/librte_dmadev.a 00:01:51.578 [145/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.578 [146/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.578 [147/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.578 [148/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.578 [149/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.578 [150/378] Linking static target lib/librte_eal.a 00:01:51.578 [151/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.578 [152/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:51.578 [153/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.578 [154/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:51.578 [155/378] Linking static target lib/librte_compressdev.a 00:01:51.578 [156/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.837 [157/378] Linking static target lib/librte_mbuf.a 00:01:51.837 [158/378] 
Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.837 [159/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.837 [160/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.837 [161/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:51.837 [162/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:51.837 [163/378] Linking target lib/librte_log.so.24.1 00:01:51.837 [164/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.837 [165/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.837 [166/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:51.837 [167/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:51.837 [168/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:51.837 [169/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:51.837 [170/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.837 [171/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.837 [172/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:51.837 [173/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:51.837 [174/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:51.837 [175/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.837 [176/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.837 [177/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.837 [178/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.837 [179/378] Linking static target lib/librte_hash.a 00:01:52.096 [180/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.096 [181/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.096 [182/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.096 [183/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.096 [184/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.096 [185/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.096 [186/378] Linking static target lib/librte_reorder.a 00:01:52.096 [187/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:52.096 [188/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:52.096 [189/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:52.096 [190/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:52.096 [191/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.096 [192/378] Linking target lib/librte_kvargs.so.24.1 00:01:52.096 [193/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.096 [194/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.096 [195/378] Linking target lib/librte_telemetry.so.24.1 00:01:52.096 
[196/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.096 [197/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.096 [198/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:52.096 [199/378] Linking static target lib/librte_power.a 00:01:52.096 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:52.096 [201/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:52.096 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:52.096 [203/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.096 [204/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.096 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:52.096 [206/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:52.096 [207/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.096 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:52.096 [209/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:52.096 [210/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.096 [211/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.096 [212/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:52.096 [213/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:52.096 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:52.096 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:52.096 [216/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:52.096 [217/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:52.096 [218/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.096 [219/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:52.096 [220/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.096 [221/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.096 [222/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:52.096 [223/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.096 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:52.096 [225/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:52.096 [226/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:52.096 [227/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.096 [228/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:52.096 [229/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:52.096 [230/378] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.354 [231/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.354 [232/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:52.354 [233/378] Linking static target drivers/librte_bus_vdev.a 00:01:52.354 [234/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.354 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:52.354 [236/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.354 [237/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:52.354 [238/378] Linking static target lib/librte_cryptodev.a 00:01:52.354 [239/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:52.354 [240/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:52.354 [241/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:52.354 [242/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:52.354 [243/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:52.354 [244/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:52.354 [245/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:52.354 [246/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:52.354 [247/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:52.354 [248/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:52.354 [249/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:52.354 [250/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.354 [251/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:52.354 [252/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:52.354 [253/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.354 [254/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:52.354 [255/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:52.354 [256/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.354 [257/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:52.354 [258/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.354 [259/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:52.354 [260/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.354 [261/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.354 [262/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:52.354 [263/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:52.354 [264/378] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.354 [265/378] Linking static target drivers/librte_bus_pci.a 00:01:52.354 [266/378] Linking static target drivers/librte_mempool_ring.a 00:01:52.354 [267/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:52.354 [268/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:52.354 [269/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:52.354 [270/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.354 [271/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:52.354 [272/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:52.354 [273/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.354 [274/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:52.354 [275/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:52.706 [276/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [277/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.706 [278/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:52.706 [279/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [280/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:52.706 [281/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:52.706 [282/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:52.706 [283/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:52.706 [284/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:52.706 [285/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:52.706 [286/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [287/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.706 [288/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:52.706 [289/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:52.706 [290/378] Linking static target lib/librte_ethdev.a 00:01:52.706 [291/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:52.706 [292/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:52.706 [293/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:52.706 [294/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:52.706 [295/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:52.706 [296/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:52.706 [297/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:52.706 [298/378] Linking static target drivers/librte_compress_isal.a 
00:01:52.706 [299/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [300/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:52.706 [301/378] Linking static target drivers/librte_compress_mlx5.a 00:01:52.967 [302/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:52.967 [303/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:52.967 [304/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:52.967 [305/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:52.967 [306/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:52.967 [307/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.967 [308/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.967 [309/378] Linking static target lib/librte_security.a 00:01:52.967 [310/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:52.967 [311/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:52.967 [312/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.967 [313/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:52.967 [314/378] Linking static target drivers/librte_common_mlx5.a 00:01:53.224 [315/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.224 [316/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.224 [317/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:53.482 [318/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:53.740 [319/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.740 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:53.740 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:53.740 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:53.740 [323/378] Linking static target drivers/librte_common_qat.a 00:01:54.306 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.306 [325/378] Linking static target lib/librte_vhost.a 00:01:54.564 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.091 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.613 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.893 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.794 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.794 [331/378] Linking target lib/librte_eal.so.24.1 00:02:05.053 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.053 [333/378] Linking target lib/librte_ring.so.24.1 00:02:05.053 [334/378] Linking target lib/librte_meter.so.24.1 00:02:05.053 [335/378] Linking target lib/librte_pci.so.24.1 00:02:05.053 
[336/378] Linking target lib/librte_timer.so.24.1 00:02:05.053 [337/378] Linking target lib/librte_dmadev.so.24.1 00:02:05.053 [338/378] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.053 [339/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:05.053 [340/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.053 [341/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:05.053 [342/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.053 [343/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.053 [344/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.053 [345/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.312 [346/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:05.312 [347/378] Linking target lib/librte_rcu.so.24.1 00:02:05.312 [348/378] Linking target lib/librte_mempool.so.24.1 00:02:05.312 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.312 [350/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.312 [351/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:05.572 [352/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.572 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.572 [354/378] Linking target lib/librte_mbuf.so.24.1 00:02:05.572 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.831 [356/378] Linking target lib/librte_compressdev.so.24.1 00:02:05.831 [357/378] Linking target lib/librte_reorder.so.24.1 00:02:05.831 [358/378] Linking target lib/librte_net.so.24.1 00:02:05.831 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:02:05.831 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.831 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:05.831 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.831 [363/378] Linking target lib/librte_cmdline.so.24.1 00:02:05.831 [364/378] Linking target lib/librte_hash.so.24.1 00:02:06.090 [365/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:06.090 [366/378] Linking target lib/librte_security.so.24.1 00:02:06.090 [367/378] Linking target lib/librte_ethdev.so.24.1 00:02:06.090 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.090 [369/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.090 [370/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:06.090 [371/378] Linking target lib/librte_power.so.24.1 00:02:06.090 [372/378] Linking target lib/librte_vhost.so.24.1 00:02:06.349 [373/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:06.349 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:06.349 [375/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:06.349 [376/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:06.608 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:06.608 [378/378] Linking target 
drivers/librte_common_qat.so.24.1 00:02:06.608 INFO: autodetecting backend as ninja 00:02:06.608 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:07.987 CC lib/log/log.o 00:02:07.987 CC lib/log/log_flags.o 00:02:07.987 CC lib/log/log_deprecated.o 00:02:07.987 CC lib/ut_mock/mock.o 00:02:07.987 CC lib/ut/ut.o 00:02:07.987 LIB libspdk_log.a 00:02:07.987 LIB libspdk_ut_mock.a 00:02:07.987 LIB libspdk_ut.a 00:02:07.987 SO libspdk_log.so.7.0 00:02:07.988 SO libspdk_ut_mock.so.6.0 00:02:07.988 SO libspdk_ut.so.2.0 00:02:07.988 SYMLINK libspdk_log.so 00:02:07.988 SYMLINK libspdk_ut_mock.so 00:02:08.247 SYMLINK libspdk_ut.so 00:02:08.505 CXX lib/trace_parser/trace.o 00:02:08.505 CC lib/util/base64.o 00:02:08.505 CC lib/util/cpuset.o 00:02:08.505 CC lib/util/bit_array.o 00:02:08.505 CC lib/util/crc16.o 00:02:08.505 CC lib/util/crc32.o 00:02:08.505 CC lib/util/crc32c.o 00:02:08.505 CC lib/dma/dma.o 00:02:08.505 CC lib/util/crc32_ieee.o 00:02:08.505 CC lib/util/fd.o 00:02:08.505 CC lib/util/crc64.o 00:02:08.505 CC lib/util/dif.o 00:02:08.505 CC lib/ioat/ioat.o 00:02:08.505 CC lib/util/fd_group.o 00:02:08.505 CC lib/util/file.o 00:02:08.505 CC lib/util/hexlify.o 00:02:08.505 CC lib/util/iov.o 00:02:08.505 CC lib/util/math.o 00:02:08.505 CC lib/util/net.o 00:02:08.505 CC lib/util/pipe.o 00:02:08.505 CC lib/util/strerror_tls.o 00:02:08.505 CC lib/util/string.o 00:02:08.505 CC lib/util/uuid.o 00:02:08.505 CC lib/util/xor.o 00:02:08.505 CC lib/util/zipf.o 00:02:08.505 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.505 CC lib/vfio_user/host/vfio_user.o 00:02:08.764 LIB libspdk_dma.a 00:02:08.764 SO libspdk_dma.so.4.0 00:02:08.764 LIB libspdk_ioat.a 00:02:08.764 SO libspdk_ioat.so.7.0 00:02:08.764 SYMLINK libspdk_dma.so 00:02:08.764 SYMLINK libspdk_ioat.so 00:02:08.764 LIB libspdk_vfio_user.a 00:02:09.023 SO libspdk_vfio_user.so.5.0 00:02:09.023 LIB libspdk_util.a 00:02:09.023 SYMLINK libspdk_vfio_user.so 00:02:09.023 SO libspdk_util.so.10.0 00:02:09.319 SYMLINK libspdk_util.so 00:02:09.319 LIB libspdk_trace_parser.a 00:02:09.319 SO libspdk_trace_parser.so.5.0 00:02:09.584 SYMLINK libspdk_trace_parser.so 00:02:09.584 CC lib/rdma_provider/common.o 00:02:09.584 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:09.584 CC lib/json/json_parse.o 00:02:09.584 CC lib/json/json_util.o 00:02:09.584 CC lib/json/json_write.o 00:02:09.584 CC lib/conf/conf.o 00:02:09.584 CC lib/env_dpdk/init.o 00:02:09.584 CC lib/env_dpdk/env.o 00:02:09.584 CC lib/env_dpdk/memory.o 00:02:09.584 CC lib/env_dpdk/pci.o 00:02:09.584 CC lib/vmd/vmd.o 00:02:09.584 CC lib/env_dpdk/threads.o 00:02:09.584 CC lib/idxd/idxd.o 00:02:09.584 CC lib/env_dpdk/pci_ioat.o 00:02:09.584 CC lib/vmd/led.o 00:02:09.584 CC lib/idxd/idxd_user.o 00:02:09.584 CC lib/env_dpdk/pci_virtio.o 00:02:09.584 CC lib/idxd/idxd_kernel.o 00:02:09.584 CC lib/env_dpdk/pci_vmd.o 00:02:09.584 CC lib/env_dpdk/pci_idxd.o 00:02:09.584 CC lib/env_dpdk/sigbus_handler.o 00:02:09.584 CC lib/env_dpdk/pci_event.o 00:02:09.584 CC lib/env_dpdk/pci_dpdk.o 00:02:09.584 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.584 CC lib/rdma_utils/rdma_utils.o 00:02:09.584 CC lib/reduce/reduce.o 00:02:09.584 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.843 LIB libspdk_conf.a 00:02:09.843 SO libspdk_conf.so.6.0 00:02:09.843 LIB libspdk_json.a 00:02:09.843 LIB libspdk_rdma_utils.a 00:02:10.101 SYMLINK libspdk_conf.so 00:02:10.101 SO libspdk_json.so.6.0 00:02:10.101 SO libspdk_rdma_utils.so.1.0 00:02:10.101 LIB 
libspdk_rdma_provider.a 00:02:10.101 SYMLINK libspdk_rdma_utils.so 00:02:10.101 SO libspdk_rdma_provider.so.6.0 00:02:10.101 SYMLINK libspdk_json.so 00:02:10.101 SYMLINK libspdk_rdma_provider.so 00:02:10.101 LIB libspdk_idxd.a 00:02:10.360 SO libspdk_idxd.so.12.0 00:02:10.360 LIB libspdk_vmd.a 00:02:10.360 LIB libspdk_reduce.a 00:02:10.360 SYMLINK libspdk_idxd.so 00:02:10.360 SO libspdk_reduce.so.6.1 00:02:10.360 SO libspdk_vmd.so.6.0 00:02:10.360 SYMLINK libspdk_reduce.so 00:02:10.360 SYMLINK libspdk_vmd.so 00:02:10.360 CC lib/jsonrpc/jsonrpc_server.o 00:02:10.360 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.360 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.360 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.618 LIB libspdk_jsonrpc.a 00:02:10.618 SO libspdk_jsonrpc.so.6.0 00:02:10.876 SYMLINK libspdk_jsonrpc.so 00:02:10.876 LIB libspdk_env_dpdk.a 00:02:10.876 SO libspdk_env_dpdk.so.15.0 00:02:11.135 SYMLINK libspdk_env_dpdk.so 00:02:11.135 CC lib/rpc/rpc.o 00:02:11.394 LIB libspdk_rpc.a 00:02:11.394 SO libspdk_rpc.so.6.0 00:02:11.652 SYMLINK libspdk_rpc.so 00:02:11.910 CC lib/notify/notify.o 00:02:11.910 CC lib/notify/notify_rpc.o 00:02:11.910 CC lib/keyring/keyring.o 00:02:11.910 CC lib/keyring/keyring_rpc.o 00:02:11.910 CC lib/trace/trace.o 00:02:11.910 CC lib/trace/trace_flags.o 00:02:11.910 CC lib/trace/trace_rpc.o 00:02:12.168 LIB libspdk_notify.a 00:02:12.168 SO libspdk_notify.so.6.0 00:02:12.168 LIB libspdk_keyring.a 00:02:12.168 LIB libspdk_trace.a 00:02:12.168 SYMLINK libspdk_notify.so 00:02:12.168 SO libspdk_keyring.so.1.0 00:02:12.168 SO libspdk_trace.so.10.0 00:02:12.168 SYMLINK libspdk_keyring.so 00:02:12.168 SYMLINK libspdk_trace.so 00:02:12.736 CC lib/thread/thread.o 00:02:12.736 CC lib/thread/iobuf.o 00:02:12.736 CC lib/sock/sock.o 00:02:12.736 CC lib/sock/sock_rpc.o 00:02:12.994 LIB libspdk_sock.a 00:02:12.994 SO libspdk_sock.so.10.0 00:02:13.252 SYMLINK libspdk_sock.so 00:02:13.510 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.510 CC lib/nvme/nvme_ctrlr.o 00:02:13.510 CC lib/nvme/nvme_fabric.o 00:02:13.510 CC lib/nvme/nvme_ns_cmd.o 00:02:13.510 CC lib/nvme/nvme_ns.o 00:02:13.510 CC lib/nvme/nvme_pcie_common.o 00:02:13.510 CC lib/nvme/nvme_pcie.o 00:02:13.510 CC lib/nvme/nvme_qpair.o 00:02:13.510 CC lib/nvme/nvme.o 00:02:13.510 CC lib/nvme/nvme_quirks.o 00:02:13.510 CC lib/nvme/nvme_transport.o 00:02:13.510 CC lib/nvme/nvme_discovery.o 00:02:13.510 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.511 CC lib/nvme/nvme_opal.o 00:02:13.511 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.511 CC lib/nvme/nvme_tcp.o 00:02:13.511 CC lib/nvme/nvme_poll_group.o 00:02:13.511 CC lib/nvme/nvme_io_msg.o 00:02:13.511 CC lib/nvme/nvme_zns.o 00:02:13.511 CC lib/nvme/nvme_stubs.o 00:02:13.511 CC lib/nvme/nvme_auth.o 00:02:13.511 CC lib/nvme/nvme_cuse.o 00:02:13.511 CC lib/nvme/nvme_rdma.o 00:02:14.079 LIB libspdk_thread.a 00:02:14.079 SO libspdk_thread.so.10.1 00:02:14.337 SYMLINK libspdk_thread.so 00:02:14.595 CC lib/init/json_config.o 00:02:14.595 CC lib/init/subsystem.o 00:02:14.595 CC lib/init/subsystem_rpc.o 00:02:14.595 CC lib/init/rpc.o 00:02:14.595 CC lib/blob/blobstore.o 00:02:14.595 CC lib/blob/request.o 00:02:14.595 CC lib/blob/zeroes.o 00:02:14.595 CC lib/blob/blob_bs_dev.o 00:02:14.595 CC lib/virtio/virtio.o 00:02:14.595 CC lib/accel/accel.o 00:02:14.595 CC lib/accel/accel_rpc.o 00:02:14.595 CC lib/virtio/virtio_vhost_user.o 00:02:14.595 CC lib/virtio/virtio_vfio_user.o 00:02:14.595 CC lib/accel/accel_sw.o 00:02:14.595 CC lib/virtio/virtio_pci.o 00:02:14.853 LIB libspdk_init.a 00:02:14.853 SO 
libspdk_init.so.5.0 00:02:14.853 LIB libspdk_virtio.a 00:02:14.853 SYMLINK libspdk_init.so 00:02:14.853 SO libspdk_virtio.so.7.0 00:02:15.111 SYMLINK libspdk_virtio.so 00:02:15.111 LIB libspdk_nvme.a 00:02:15.369 SO libspdk_nvme.so.13.1 00:02:15.369 CC lib/event/app.o 00:02:15.369 CC lib/event/reactor.o 00:02:15.369 CC lib/event/log_rpc.o 00:02:15.369 CC lib/event/app_rpc.o 00:02:15.369 CC lib/event/scheduler_static.o 00:02:15.628 LIB libspdk_accel.a 00:02:15.628 SO libspdk_accel.so.16.0 00:02:15.628 SYMLINK libspdk_nvme.so 00:02:15.628 SYMLINK libspdk_accel.so 00:02:15.628 LIB libspdk_event.a 00:02:15.885 SO libspdk_event.so.14.0 00:02:15.885 SYMLINK libspdk_event.so 00:02:15.885 CC lib/bdev/bdev.o 00:02:15.885 CC lib/bdev/bdev_rpc.o 00:02:15.885 CC lib/bdev/bdev_zone.o 00:02:15.885 CC lib/bdev/part.o 00:02:15.885 CC lib/bdev/scsi_nvme.o 00:02:17.261 LIB libspdk_blob.a 00:02:17.261 SO libspdk_blob.so.11.0 00:02:17.518 SYMLINK libspdk_blob.so 00:02:17.775 CC lib/lvol/lvol.o 00:02:17.775 CC lib/blobfs/blobfs.o 00:02:17.775 CC lib/blobfs/tree.o 00:02:18.342 LIB libspdk_bdev.a 00:02:18.601 SO libspdk_bdev.so.16.0 00:02:18.601 SYMLINK libspdk_bdev.so 00:02:18.601 LIB libspdk_blobfs.a 00:02:18.601 SO libspdk_blobfs.so.10.0 00:02:18.601 LIB libspdk_lvol.a 00:02:18.860 SYMLINK libspdk_blobfs.so 00:02:18.860 SO libspdk_lvol.so.10.0 00:02:18.860 SYMLINK libspdk_lvol.so 00:02:18.860 CC lib/nvmf/ctrlr.o 00:02:18.860 CC lib/nvmf/ctrlr_discovery.o 00:02:18.860 CC lib/nvmf/ctrlr_bdev.o 00:02:18.860 CC lib/nvmf/subsystem.o 00:02:18.860 CC lib/nvmf/nvmf.o 00:02:18.860 CC lib/nvmf/nvmf_rpc.o 00:02:18.860 CC lib/nvmf/tcp.o 00:02:18.860 CC lib/nvmf/transport.o 00:02:18.860 CC lib/nvmf/stubs.o 00:02:18.860 CC lib/nvmf/mdns_server.o 00:02:18.860 CC lib/nvmf/rdma.o 00:02:18.860 CC lib/nvmf/auth.o 00:02:18.860 CC lib/scsi/dev.o 00:02:18.860 CC lib/scsi/lun.o 00:02:18.860 CC lib/scsi/port.o 00:02:18.860 CC lib/scsi/scsi.o 00:02:18.860 CC lib/scsi/scsi_bdev.o 00:02:18.860 CC lib/scsi/task.o 00:02:18.860 CC lib/scsi/scsi_pr.o 00:02:18.860 CC lib/scsi/scsi_rpc.o 00:02:18.860 CC lib/ublk/ublk.o 00:02:18.860 CC lib/ublk/ublk_rpc.o 00:02:18.860 CC lib/nbd/nbd_rpc.o 00:02:18.860 CC lib/nbd/nbd.o 00:02:18.860 CC lib/ftl/ftl_core.o 00:02:18.860 CC lib/ftl/ftl_layout.o 00:02:18.860 CC lib/ftl/ftl_init.o 00:02:18.860 CC lib/ftl/ftl_debug.o 00:02:18.860 CC lib/ftl/ftl_io.o 00:02:18.860 CC lib/ftl/ftl_sb.o 00:02:18.860 CC lib/ftl/ftl_l2p.o 00:02:18.860 CC lib/ftl/ftl_l2p_flat.o 00:02:18.860 CC lib/ftl/ftl_nv_cache.o 00:02:18.860 CC lib/ftl/ftl_band.o 00:02:18.860 CC lib/ftl/ftl_band_ops.o 00:02:18.860 CC lib/ftl/ftl_writer.o 00:02:18.860 CC lib/ftl/ftl_reloc.o 00:02:18.860 CC lib/ftl/ftl_rq.o 00:02:18.860 CC lib/ftl/ftl_l2p_cache.o 00:02:18.860 CC lib/ftl/ftl_p2l.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.119 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.119 CC lib/ftl/utils/ftl_conf.o 00:02:19.119 CC lib/ftl/utils/ftl_mempool.o 00:02:19.119 CC lib/ftl/utils/ftl_md.o 00:02:19.119 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.119 
CC lib/ftl/utils/ftl_property.o 00:02:19.119 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.119 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.119 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.119 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.119 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:19.119 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.119 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.119 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.119 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.119 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.119 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.119 CC lib/ftl/base/ftl_base_dev.o 00:02:19.119 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.119 CC lib/ftl/ftl_trace.o 00:02:19.686 LIB libspdk_scsi.a 00:02:19.686 LIB libspdk_nbd.a 00:02:19.686 SO libspdk_scsi.so.9.0 00:02:19.686 SO libspdk_nbd.so.7.0 00:02:19.686 SYMLINK libspdk_nbd.so 00:02:19.686 SYMLINK libspdk_scsi.so 00:02:19.944 LIB libspdk_ublk.a 00:02:19.944 SO libspdk_ublk.so.3.0 00:02:19.944 SYMLINK libspdk_ublk.so 00:02:19.944 LIB libspdk_ftl.a 00:02:20.203 CC lib/vhost/vhost.o 00:02:20.203 CC lib/vhost/vhost_rpc.o 00:02:20.203 CC lib/vhost/vhost_scsi.o 00:02:20.203 CC lib/vhost/vhost_blk.o 00:02:20.203 CC lib/vhost/rte_vhost_user.o 00:02:20.203 CC lib/iscsi/conn.o 00:02:20.203 CC lib/iscsi/init_grp.o 00:02:20.203 CC lib/iscsi/iscsi.o 00:02:20.203 CC lib/iscsi/md5.o 00:02:20.203 CC lib/iscsi/param.o 00:02:20.203 CC lib/iscsi/portal_grp.o 00:02:20.203 CC lib/iscsi/tgt_node.o 00:02:20.203 CC lib/iscsi/iscsi_subsystem.o 00:02:20.203 CC lib/iscsi/iscsi_rpc.o 00:02:20.203 CC lib/iscsi/task.o 00:02:20.203 SO libspdk_ftl.so.9.0 00:02:20.770 SYMLINK libspdk_ftl.so 00:02:21.336 LIB libspdk_iscsi.a 00:02:21.336 LIB libspdk_vhost.a 00:02:21.336 SO libspdk_vhost.so.8.0 00:02:21.336 SO libspdk_iscsi.so.8.0 00:02:21.336 LIB libspdk_nvmf.a 00:02:21.336 SO libspdk_nvmf.so.19.0 00:02:21.336 SYMLINK libspdk_iscsi.so 00:02:21.595 SYMLINK libspdk_vhost.so 00:02:21.595 SYMLINK libspdk_nvmf.so 00:02:22.163 CC module/env_dpdk/env_dpdk_rpc.o 00:02:22.163 CC module/blob/bdev/blob_bdev.o 00:02:22.163 LIB libspdk_env_dpdk_rpc.a 00:02:22.163 CC module/accel/iaa/accel_iaa.o 00:02:22.163 CC module/accel/iaa/accel_iaa_rpc.o 00:02:22.163 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:22.163 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:22.163 CC module/accel/dsa/accel_dsa.o 00:02:22.163 CC module/accel/dsa/accel_dsa_rpc.o 00:02:22.163 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:22.163 CC module/scheduler/gscheduler/gscheduler.o 00:02:22.163 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:22.163 CC module/accel/ioat/accel_ioat.o 00:02:22.163 CC module/accel/error/accel_error_rpc.o 00:02:22.163 CC module/accel/error/accel_error.o 00:02:22.163 CC module/accel/ioat/accel_ioat_rpc.o 00:02:22.421 CC module/keyring/linux/keyring.o 00:02:22.421 CC module/keyring/linux/keyring_rpc.o 00:02:22.421 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.421 CC module/keyring/file/keyring.o 00:02:22.421 CC module/keyring/file/keyring_rpc.o 00:02:22.421 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:22.421 CC module/sock/posix/posix.o 00:02:22.421 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.421 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.421 LIB libspdk_keyring_linux.a 00:02:22.421 LIB libspdk_scheduler_gscheduler.a 00:02:22.421 LIB libspdk_keyring_file.a 00:02:22.421 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.421 LIB libspdk_accel_iaa.a 00:02:22.421 LIB libspdk_accel_error.a 
00:02:22.421 LIB libspdk_accel_ioat.a 00:02:22.421 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.421 SO libspdk_keyring_linux.so.1.0 00:02:22.421 LIB libspdk_scheduler_dynamic.a 00:02:22.421 SO libspdk_keyring_file.so.1.0 00:02:22.421 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.421 SO libspdk_accel_error.so.2.0 00:02:22.421 SO libspdk_accel_iaa.so.3.0 00:02:22.700 SO libspdk_accel_ioat.so.6.0 00:02:22.700 LIB libspdk_accel_dsa.a 00:02:22.700 LIB libspdk_blob_bdev.a 00:02:22.700 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.700 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.700 SO libspdk_accel_dsa.so.5.0 00:02:22.700 SYMLINK libspdk_keyring_linux.so 00:02:22.700 SYMLINK libspdk_keyring_file.so 00:02:22.700 SO libspdk_blob_bdev.so.11.0 00:02:22.700 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.700 SYMLINK libspdk_accel_error.so 00:02:22.700 SYMLINK libspdk_accel_ioat.so 00:02:22.700 SYMLINK libspdk_accel_iaa.so 00:02:22.700 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.700 SYMLINK libspdk_accel_dsa.so 00:02:22.700 SYMLINK libspdk_blob_bdev.so 00:02:22.992 LIB libspdk_sock_posix.a 00:02:22.992 SO libspdk_sock_posix.so.6.0 00:02:23.250 SYMLINK libspdk_sock_posix.so 00:02:23.250 CC module/bdev/gpt/gpt.o 00:02:23.250 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.250 CC module/bdev/null/bdev_null_rpc.o 00:02:23.250 CC module/bdev/null/bdev_null.o 00:02:23.250 CC module/bdev/delay/vbdev_delay.o 00:02:23.250 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.250 CC module/bdev/compress/vbdev_compress.o 00:02:23.250 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:23.250 CC module/bdev/error/vbdev_error.o 00:02:23.250 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.250 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.250 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.250 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.250 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.250 CC module/bdev/crypto/vbdev_crypto.o 00:02:23.250 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:23.250 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.250 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:23.250 CC module/bdev/nvme/bdev_nvme.o 00:02:23.250 CC module/bdev/raid/bdev_raid.o 00:02:23.250 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.250 CC module/bdev/nvme/nvme_rpc.o 00:02:23.250 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.250 CC module/bdev/malloc/bdev_malloc.o 00:02:23.250 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.250 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.250 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.250 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.250 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.250 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.250 CC module/bdev/nvme/vbdev_opal.o 00:02:23.250 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.250 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.250 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.250 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.250 CC module/bdev/raid/raid0.o 00:02:23.250 CC module/bdev/raid/raid1.o 00:02:23.250 CC module/bdev/raid/concat.o 00:02:23.250 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.250 CC module/bdev/ftl/bdev_ftl.o 00:02:23.250 CC module/bdev/split/vbdev_split.o 00:02:23.250 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.250 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.250 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.250 CC module/bdev/aio/bdev_aio.o 00:02:23.250 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.250 LIB libspdk_accel_dpdk_compressdev.a 00:02:23.250 SO 
libspdk_accel_dpdk_compressdev.so.3.0 00:02:23.509 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:23.509 LIB libspdk_blobfs_bdev.a 00:02:23.509 SO libspdk_blobfs_bdev.so.6.0 00:02:23.509 LIB libspdk_bdev_gpt.a 00:02:23.509 LIB libspdk_bdev_error.a 00:02:23.509 LIB libspdk_bdev_split.a 00:02:23.509 LIB libspdk_bdev_null.a 00:02:23.509 LIB libspdk_accel_dpdk_cryptodev.a 00:02:23.509 SO libspdk_bdev_error.so.6.0 00:02:23.509 SYMLINK libspdk_blobfs_bdev.so 00:02:23.509 SO libspdk_bdev_gpt.so.6.0 00:02:23.509 SO libspdk_bdev_null.so.6.0 00:02:23.509 SO libspdk_bdev_split.so.6.0 00:02:23.509 LIB libspdk_bdev_aio.a 00:02:23.509 LIB libspdk_bdev_ftl.a 00:02:23.509 LIB libspdk_bdev_passthru.a 00:02:23.767 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:23.767 LIB libspdk_bdev_delay.a 00:02:23.767 SO libspdk_bdev_aio.so.6.0 00:02:23.767 LIB libspdk_bdev_zone_block.a 00:02:23.767 SO libspdk_bdev_ftl.so.6.0 00:02:23.767 SYMLINK libspdk_bdev_error.so 00:02:23.767 LIB libspdk_bdev_compress.a 00:02:23.767 SYMLINK libspdk_bdev_gpt.so 00:02:23.767 SO libspdk_bdev_passthru.so.6.0 00:02:23.767 SYMLINK libspdk_bdev_split.so 00:02:23.767 LIB libspdk_bdev_malloc.a 00:02:23.767 SYMLINK libspdk_bdev_null.so 00:02:23.767 LIB libspdk_bdev_iscsi.a 00:02:23.767 SO libspdk_bdev_delay.so.6.0 00:02:23.767 SO libspdk_bdev_zone_block.so.6.0 00:02:23.767 SO libspdk_bdev_compress.so.6.0 00:02:23.767 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:23.767 SO libspdk_bdev_malloc.so.6.0 00:02:23.767 SYMLINK libspdk_bdev_ftl.so 00:02:23.767 SYMLINK libspdk_bdev_aio.so 00:02:23.767 SO libspdk_bdev_iscsi.so.6.0 00:02:23.767 SYMLINK libspdk_bdev_passthru.so 00:02:23.767 SYMLINK libspdk_bdev_zone_block.so 00:02:23.767 SYMLINK libspdk_bdev_delay.so 00:02:23.767 SYMLINK libspdk_bdev_compress.so 00:02:23.767 LIB libspdk_bdev_lvol.a 00:02:23.767 SYMLINK libspdk_bdev_malloc.so 00:02:23.767 SYMLINK libspdk_bdev_iscsi.so 00:02:23.767 LIB libspdk_bdev_virtio.a 00:02:23.767 SO libspdk_bdev_lvol.so.6.0 00:02:23.767 SO libspdk_bdev_virtio.so.6.0 00:02:24.026 SYMLINK libspdk_bdev_lvol.so 00:02:24.026 SYMLINK libspdk_bdev_virtio.so 00:02:24.026 LIB libspdk_bdev_crypto.a 00:02:24.026 SO libspdk_bdev_crypto.so.6.0 00:02:24.026 SYMLINK libspdk_bdev_crypto.so 00:02:24.284 LIB libspdk_bdev_raid.a 00:02:24.284 SO libspdk_bdev_raid.so.6.0 00:02:24.542 SYMLINK libspdk_bdev_raid.so 00:02:25.478 LIB libspdk_bdev_nvme.a 00:02:25.478 SO libspdk_bdev_nvme.so.7.0 00:02:25.478 SYMLINK libspdk_bdev_nvme.so 00:02:26.415 CC module/event/subsystems/scheduler/scheduler.o 00:02:26.415 CC module/event/subsystems/vmd/vmd.o 00:02:26.415 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:26.415 CC module/event/subsystems/sock/sock.o 00:02:26.415 CC module/event/subsystems/iobuf/iobuf.o 00:02:26.415 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:26.415 CC module/event/subsystems/keyring/keyring.o 00:02:26.415 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:26.415 LIB libspdk_event_scheduler.a 00:02:26.415 LIB libspdk_event_keyring.a 00:02:26.415 LIB libspdk_event_vmd.a 00:02:26.415 LIB libspdk_event_vhost_blk.a 00:02:26.415 LIB libspdk_event_sock.a 00:02:26.415 LIB libspdk_event_iobuf.a 00:02:26.415 SO libspdk_event_scheduler.so.4.0 00:02:26.415 SO libspdk_event_keyring.so.1.0 00:02:26.415 SO libspdk_event_vmd.so.6.0 00:02:26.415 SO libspdk_event_vhost_blk.so.3.0 00:02:26.675 SO libspdk_event_sock.so.5.0 00:02:26.675 SO libspdk_event_iobuf.so.3.0 00:02:26.675 SYMLINK libspdk_event_keyring.so 00:02:26.675 SYMLINK libspdk_event_scheduler.so 00:02:26.675 SYMLINK 
libspdk_event_vmd.so 00:02:26.675 SYMLINK libspdk_event_vhost_blk.so 00:02:26.675 SYMLINK libspdk_event_sock.so 00:02:26.675 SYMLINK libspdk_event_iobuf.so 00:02:26.934 CC module/event/subsystems/accel/accel.o 00:02:27.193 LIB libspdk_event_accel.a 00:02:27.193 SO libspdk_event_accel.so.6.0 00:02:27.193 SYMLINK libspdk_event_accel.so 00:02:27.764 CC module/event/subsystems/bdev/bdev.o 00:02:27.764 LIB libspdk_event_bdev.a 00:02:27.764 SO libspdk_event_bdev.so.6.0 00:02:28.022 SYMLINK libspdk_event_bdev.so 00:02:28.281 CC module/event/subsystems/ublk/ublk.o 00:02:28.281 CC module/event/subsystems/nbd/nbd.o 00:02:28.281 CC module/event/subsystems/scsi/scsi.o 00:02:28.281 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:28.281 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.539 LIB libspdk_event_ublk.a 00:02:28.539 LIB libspdk_event_nbd.a 00:02:28.539 LIB libspdk_event_scsi.a 00:02:28.539 SO libspdk_event_ublk.so.3.0 00:02:28.539 SO libspdk_event_nbd.so.6.0 00:02:28.539 SO libspdk_event_scsi.so.6.0 00:02:28.539 LIB libspdk_event_nvmf.a 00:02:28.539 SYMLINK libspdk_event_nbd.so 00:02:28.539 SO libspdk_event_nvmf.so.6.0 00:02:28.539 SYMLINK libspdk_event_scsi.so 00:02:28.539 SYMLINK libspdk_event_ublk.so 00:02:28.798 SYMLINK libspdk_event_nvmf.so 00:02:29.057 CC module/event/subsystems/iscsi/iscsi.o 00:02:29.057 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:29.057 LIB libspdk_event_vhost_scsi.a 00:02:29.057 LIB libspdk_event_iscsi.a 00:02:29.316 SO libspdk_event_vhost_scsi.so.3.0 00:02:29.316 SO libspdk_event_iscsi.so.6.0 00:02:29.316 SYMLINK libspdk_event_vhost_scsi.so 00:02:29.316 SYMLINK libspdk_event_iscsi.so 00:02:29.576 SO libspdk.so.6.0 00:02:29.576 SYMLINK libspdk.so 00:02:29.835 CC app/spdk_nvme_identify/identify.o 00:02:29.835 TEST_HEADER include/spdk/accel.h 00:02:29.835 TEST_HEADER include/spdk/accel_module.h 00:02:29.835 CC app/spdk_nvme_perf/perf.o 00:02:29.835 TEST_HEADER include/spdk/assert.h 00:02:29.835 TEST_HEADER include/spdk/barrier.h 00:02:29.835 TEST_HEADER include/spdk/base64.h 00:02:29.835 TEST_HEADER include/spdk/bdev.h 00:02:29.835 CC app/trace_record/trace_record.o 00:02:29.835 TEST_HEADER include/spdk/bdev_module.h 00:02:29.835 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.835 TEST_HEADER include/spdk/bit_pool.h 00:02:29.835 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.835 CC test/rpc_client/rpc_client_test.o 00:02:29.835 TEST_HEADER include/spdk/bit_array.h 00:02:29.835 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.835 TEST_HEADER include/spdk/blob.h 00:02:29.835 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.835 TEST_HEADER include/spdk/conf.h 00:02:29.835 TEST_HEADER include/spdk/blobfs.h 00:02:29.835 TEST_HEADER include/spdk/config.h 00:02:29.835 CC app/spdk_lspci/spdk_lspci.o 00:02:29.835 TEST_HEADER include/spdk/cpuset.h 00:02:29.835 TEST_HEADER include/spdk/crc32.h 00:02:29.835 TEST_HEADER include/spdk/crc16.h 00:02:29.835 TEST_HEADER include/spdk/crc64.h 00:02:29.835 TEST_HEADER include/spdk/dif.h 00:02:29.835 TEST_HEADER include/spdk/dma.h 00:02:29.835 TEST_HEADER include/spdk/env.h 00:02:29.835 TEST_HEADER include/spdk/endian.h 00:02:29.835 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.835 CC app/spdk_top/spdk_top.o 00:02:29.835 TEST_HEADER include/spdk/event.h 00:02:29.835 CXX app/trace/trace.o 00:02:29.835 TEST_HEADER include/spdk/fd_group.h 00:02:29.835 TEST_HEADER include/spdk/fd.h 00:02:29.835 TEST_HEADER include/spdk/file.h 00:02:29.835 TEST_HEADER include/spdk/ftl.h 00:02:29.835 TEST_HEADER include/spdk/hexlify.h 00:02:29.835 
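The TEST_HEADER include/spdk/*.h entries here, together with the CXX test/cpp_headers/*.o steps that follow below, appear to be SPDK's public-header check: each header under include/spdk/ is compiled on its own as C++ to verify it is self-contained. A minimal sketch of what one such step amounts to, assuming the check simply wraps a single header in its own translation unit (the file names below are illustrative and not taken from this log):

# hypothetical stand-alone compile check for one public header, run from the spdk source root
echo '#include "spdk/accel.h"' > /tmp/check_accel.cpp
g++ -I include -c /tmp/check_accel.cpp -o /tmp/check_accel.o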
TEST_HEADER include/spdk/gpt_spec.h 00:02:29.835 TEST_HEADER include/spdk/histogram_data.h 00:02:29.835 TEST_HEADER include/spdk/idxd.h 00:02:29.835 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.835 TEST_HEADER include/spdk/ioat.h 00:02:29.835 TEST_HEADER include/spdk/init.h 00:02:29.835 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.835 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.835 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.835 TEST_HEADER include/spdk/json.h 00:02:29.835 TEST_HEADER include/spdk/likely.h 00:02:29.835 TEST_HEADER include/spdk/keyring.h 00:02:29.835 TEST_HEADER include/spdk/log.h 00:02:29.835 TEST_HEADER include/spdk/keyring_module.h 00:02:29.835 TEST_HEADER include/spdk/lvol.h 00:02:29.835 TEST_HEADER include/spdk/memory.h 00:02:29.835 TEST_HEADER include/spdk/mmio.h 00:02:29.835 TEST_HEADER include/spdk/nbd.h 00:02:29.835 TEST_HEADER include/spdk/net.h 00:02:29.835 TEST_HEADER include/spdk/notify.h 00:02:29.835 TEST_HEADER include/spdk/nvme.h 00:02:29.835 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.835 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.835 CC app/spdk_dd/spdk_dd.o 00:02:29.835 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.835 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.835 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.835 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.835 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.835 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.835 TEST_HEADER include/spdk/nvmf.h 00:02:29.835 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.835 TEST_HEADER include/spdk/opal_spec.h 00:02:29.835 TEST_HEADER include/spdk/pci_ids.h 00:02:29.835 TEST_HEADER include/spdk/pipe.h 00:02:29.835 TEST_HEADER include/spdk/opal.h 00:02:29.835 TEST_HEADER include/spdk/queue.h 00:02:29.835 TEST_HEADER include/spdk/rpc.h 00:02:29.835 TEST_HEADER include/spdk/scheduler.h 00:02:29.835 TEST_HEADER include/spdk/reduce.h 00:02:29.835 TEST_HEADER include/spdk/scsi.h 00:02:29.835 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.835 TEST_HEADER include/spdk/sock.h 00:02:29.835 TEST_HEADER include/spdk/string.h 00:02:29.835 TEST_HEADER include/spdk/stdinc.h 00:02:29.835 TEST_HEADER include/spdk/thread.h 00:02:29.835 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.835 TEST_HEADER include/spdk/trace.h 00:02:29.835 TEST_HEADER include/spdk/tree.h 00:02:29.835 TEST_HEADER include/spdk/trace_parser.h 00:02:29.835 TEST_HEADER include/spdk/util.h 00:02:29.835 TEST_HEADER include/spdk/ublk.h 00:02:29.835 TEST_HEADER include/spdk/version.h 00:02:29.835 TEST_HEADER include/spdk/uuid.h 00:02:29.835 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.835 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.835 TEST_HEADER include/spdk/vhost.h 00:02:29.835 TEST_HEADER include/spdk/vmd.h 00:02:29.835 TEST_HEADER include/spdk/xor.h 00:02:29.835 TEST_HEADER include/spdk/zipf.h 00:02:29.835 CC app/nvmf_tgt/nvmf_main.o 00:02:29.835 CXX test/cpp_headers/accel.o 00:02:29.835 CXX test/cpp_headers/accel_module.o 00:02:29.835 CXX test/cpp_headers/assert.o 00:02:29.835 CXX test/cpp_headers/base64.o 00:02:29.835 CXX test/cpp_headers/barrier.o 00:02:29.835 CXX test/cpp_headers/bdev_module.o 00:02:29.835 CXX test/cpp_headers/bdev.o 00:02:29.835 CXX test/cpp_headers/bit_array.o 00:02:29.835 CXX test/cpp_headers/bdev_zone.o 00:02:29.835 CXX test/cpp_headers/blob_bdev.o 00:02:30.100 CXX test/cpp_headers/bit_pool.o 00:02:30.100 CXX test/cpp_headers/blobfs.o 00:02:30.100 CXX test/cpp_headers/blob.o 00:02:30.100 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.100 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:30.100 CXX test/cpp_headers/cpuset.o 00:02:30.100 CXX test/cpp_headers/conf.o 00:02:30.100 CXX test/cpp_headers/config.o 00:02:30.100 CXX test/cpp_headers/crc16.o 00:02:30.100 CXX test/cpp_headers/crc32.o 00:02:30.100 CXX test/cpp_headers/crc64.o 00:02:30.100 CXX test/cpp_headers/dma.o 00:02:30.100 CXX test/cpp_headers/dif.o 00:02:30.100 CXX test/cpp_headers/endian.o 00:02:30.100 CXX test/cpp_headers/env_dpdk.o 00:02:30.100 CXX test/cpp_headers/event.o 00:02:30.100 CXX test/cpp_headers/fd_group.o 00:02:30.100 CXX test/cpp_headers/env.o 00:02:30.100 CXX test/cpp_headers/fd.o 00:02:30.100 CXX test/cpp_headers/file.o 00:02:30.100 CC app/spdk_tgt/spdk_tgt.o 00:02:30.100 CXX test/cpp_headers/ftl.o 00:02:30.100 CXX test/cpp_headers/gpt_spec.o 00:02:30.100 CXX test/cpp_headers/idxd.o 00:02:30.100 CXX test/cpp_headers/hexlify.o 00:02:30.100 CXX test/cpp_headers/init.o 00:02:30.100 CXX test/cpp_headers/histogram_data.o 00:02:30.100 CXX test/cpp_headers/idxd_spec.o 00:02:30.100 CXX test/cpp_headers/ioat_spec.o 00:02:30.100 CXX test/cpp_headers/json.o 00:02:30.100 CXX test/cpp_headers/ioat.o 00:02:30.100 CXX test/cpp_headers/keyring.o 00:02:30.100 CXX test/cpp_headers/jsonrpc.o 00:02:30.100 CXX test/cpp_headers/keyring_module.o 00:02:30.100 CXX test/cpp_headers/iscsi_spec.o 00:02:30.100 CXX test/cpp_headers/likely.o 00:02:30.100 CXX test/cpp_headers/log.o 00:02:30.100 CXX test/cpp_headers/lvol.o 00:02:30.100 CXX test/cpp_headers/mmio.o 00:02:30.100 CXX test/cpp_headers/memory.o 00:02:30.100 CXX test/cpp_headers/nbd.o 00:02:30.100 CXX test/cpp_headers/notify.o 00:02:30.100 CXX test/cpp_headers/net.o 00:02:30.100 CXX test/cpp_headers/nvme.o 00:02:30.100 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.100 CXX test/cpp_headers/nvme_intel.o 00:02:30.100 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.100 CXX test/cpp_headers/nvme_zns.o 00:02:30.100 CXX test/cpp_headers/nvme_spec.o 00:02:30.100 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.100 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.100 CXX test/cpp_headers/nvmf.o 00:02:30.100 CXX test/cpp_headers/nvmf_spec.o 00:02:30.100 CXX test/cpp_headers/nvmf_transport.o 00:02:30.100 CXX test/cpp_headers/opal.o 00:02:30.100 CXX test/cpp_headers/opal_spec.o 00:02:30.100 CXX test/cpp_headers/pci_ids.o 00:02:30.100 CXX test/cpp_headers/pipe.o 00:02:30.100 CXX test/cpp_headers/queue.o 00:02:30.100 CXX test/cpp_headers/reduce.o 00:02:30.100 CXX test/cpp_headers/rpc.o 00:02:30.100 CXX test/cpp_headers/scheduler.o 00:02:30.100 CXX test/cpp_headers/scsi.o 00:02:30.100 CXX test/cpp_headers/scsi_spec.o 00:02:30.100 CXX test/cpp_headers/sock.o 00:02:30.100 CXX test/cpp_headers/stdinc.o 00:02:30.100 CXX test/cpp_headers/string.o 00:02:30.100 CXX test/cpp_headers/thread.o 00:02:30.100 CXX test/cpp_headers/trace.o 00:02:30.100 CXX test/cpp_headers/tree.o 00:02:30.100 CXX test/cpp_headers/trace_parser.o 00:02:30.100 CXX test/cpp_headers/ublk.o 00:02:30.100 CXX test/cpp_headers/util.o 00:02:30.100 CXX test/cpp_headers/uuid.o 00:02:30.100 CXX test/cpp_headers/version.o 00:02:30.100 CXX test/cpp_headers/vfio_user_pci.o 00:02:30.100 CC test/app/histogram_perf/histogram_perf.o 00:02:30.100 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.100 CC examples/util/zipf/zipf.o 00:02:30.100 CC test/env/memory/memory_ut.o 00:02:30.100 CC test/app/jsoncat/jsoncat.o 00:02:30.100 CXX test/cpp_headers/vfio_user_spec.o 00:02:30.100 CXX test/cpp_headers/vhost.o 00:02:30.100 CC test/env/pci/pci_ut.o 00:02:30.100 CC test/app/stub/stub.o 00:02:30.100 CC 
examples/ioat/verify/verify.o 00:02:30.391 CC test/env/vtophys/vtophys.o 00:02:30.391 CC app/fio/nvme/fio_plugin.o 00:02:30.391 CC test/thread/poller_perf/poller_perf.o 00:02:30.391 CC test/app/bdev_svc/bdev_svc.o 00:02:30.391 CC examples/ioat/perf/perf.o 00:02:30.391 CC test/dma/test_dma/test_dma.o 00:02:30.391 CC app/fio/bdev/fio_plugin.o 00:02:30.391 LINK spdk_lspci 00:02:30.662 LINK rpc_client_test 00:02:30.662 LINK spdk_nvme_discover 00:02:30.921 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:30.921 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.921 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:30.921 LINK interrupt_tgt 00:02:30.921 CXX test/cpp_headers/vmd.o 00:02:30.921 CXX test/cpp_headers/xor.o 00:02:30.921 LINK nvmf_tgt 00:02:30.921 LINK histogram_perf 00:02:30.921 LINK spdk_trace_record 00:02:30.921 CXX test/cpp_headers/zipf.o 00:02:30.921 LINK jsoncat 00:02:30.921 LINK vtophys 00:02:30.921 LINK iscsi_tgt 00:02:30.921 LINK env_dpdk_post_init 00:02:30.921 LINK zipf 00:02:30.921 LINK spdk_tgt 00:02:30.921 LINK poller_perf 00:02:30.921 LINK stub 00:02:30.921 LINK bdev_svc 00:02:30.921 LINK verify 00:02:30.921 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:30.921 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:30.921 LINK spdk_dd 00:02:31.180 LINK spdk_trace 00:02:31.180 LINK pci_ut 00:02:31.180 LINK test_dma 00:02:31.180 LINK spdk_nvme 00:02:31.439 LINK spdk_bdev 00:02:31.439 LINK nvme_fuzz 00:02:31.439 LINK ioat_perf 00:02:31.439 LINK spdk_nvme_identify 00:02:31.439 LINK spdk_nvme_perf 00:02:31.439 CC test/event/event_perf/event_perf.o 00:02:31.439 CC test/event/reactor/reactor.o 00:02:31.439 LINK vhost_fuzz 00:02:31.439 CC test/event/reactor_perf/reactor_perf.o 00:02:31.439 CC examples/sock/hello_world/hello_sock.o 00:02:31.439 CC examples/idxd/perf/perf.o 00:02:31.439 CC test/event/app_repeat/app_repeat.o 00:02:31.439 CC examples/vmd/led/led.o 00:02:31.439 CC test/event/scheduler/scheduler.o 00:02:31.439 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.439 LINK mem_callbacks 00:02:31.439 CC app/vhost/vhost.o 00:02:31.439 CC examples/thread/thread/thread_ex.o 00:02:31.439 LINK spdk_top 00:02:31.696 LINK reactor 00:02:31.696 LINK event_perf 00:02:31.696 LINK reactor_perf 00:02:31.696 LINK app_repeat 00:02:31.696 LINK lsvmd 00:02:31.696 LINK led 00:02:31.696 LINK vhost 00:02:31.696 LINK hello_sock 00:02:31.697 LINK idxd_perf 00:02:31.697 LINK scheduler 00:02:31.697 CC test/nvme/e2edp/nvme_dp.o 00:02:31.697 CC test/nvme/aer/aer.o 00:02:31.955 CC test/nvme/reset/reset.o 00:02:31.955 CC test/nvme/cuse/cuse.o 00:02:31.955 CC test/nvme/startup/startup.o 00:02:31.955 CC test/nvme/overhead/overhead.o 00:02:31.955 LINK thread 00:02:31.955 CC test/nvme/reserve/reserve.o 00:02:31.955 CC test/blobfs/mkfs/mkfs.o 00:02:31.955 CC test/nvme/simple_copy/simple_copy.o 00:02:31.955 CC test/nvme/boot_partition/boot_partition.o 00:02:31.955 CC test/nvme/err_injection/err_injection.o 00:02:31.955 CC test/nvme/sgl/sgl.o 00:02:31.955 CC test/nvme/fdp/fdp.o 00:02:31.955 CC test/nvme/fused_ordering/fused_ordering.o 00:02:31.955 CC test/nvme/connect_stress/connect_stress.o 00:02:31.955 CC test/nvme/compliance/nvme_compliance.o 00:02:31.955 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:31.955 CC test/accel/dif/dif.o 00:02:31.955 LINK memory_ut 00:02:31.955 CC test/lvol/esnap/esnap.o 00:02:31.955 LINK boot_partition 00:02:31.955 LINK startup 00:02:31.955 LINK err_injection 00:02:31.955 LINK reserve 00:02:31.955 LINK fused_ordering 00:02:32.214 LINK connect_stress 00:02:32.214 LINK mkfs 00:02:32.214 LINK 
doorbell_aers 00:02:32.214 LINK reset 00:02:32.214 LINK simple_copy 00:02:32.214 LINK nvme_dp 00:02:32.214 LINK nvme_compliance 00:02:32.214 LINK aer 00:02:32.214 LINK sgl 00:02:32.214 LINK overhead 00:02:32.214 LINK fdp 00:02:32.214 LINK iscsi_fuzz 00:02:32.214 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.214 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.214 CC examples/nvme/reconnect/reconnect.o 00:02:32.214 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.214 CC examples/nvme/hello_world/hello_world.o 00:02:32.214 CC examples/nvme/abort/abort.o 00:02:32.214 CC examples/nvme/hotplug/hotplug.o 00:02:32.214 CC examples/nvme/arbitration/arbitration.o 00:02:32.214 LINK dif 00:02:32.472 CC examples/accel/perf/accel_perf.o 00:02:32.472 CC examples/blob/cli/blobcli.o 00:02:32.472 LINK pmr_persistence 00:02:32.472 CC examples/blob/hello_world/hello_blob.o 00:02:32.472 LINK cmb_copy 00:02:32.472 LINK hello_world 00:02:32.472 LINK hotplug 00:02:32.732 LINK reconnect 00:02:32.732 LINK arbitration 00:02:32.732 LINK abort 00:02:32.732 LINK nvme_manage 00:02:32.732 LINK hello_blob 00:02:32.991 LINK accel_perf 00:02:32.991 CC test/bdev/bdevio/bdevio.o 00:02:32.991 LINK blobcli 00:02:32.991 LINK cuse 00:02:33.559 LINK bdevio 00:02:33.559 CC examples/bdev/hello_world/hello_bdev.o 00:02:33.559 CC examples/bdev/bdevperf/bdevperf.o 00:02:33.819 LINK hello_bdev 00:02:34.387 LINK bdevperf 00:02:34.955 CC examples/nvmf/nvmf/nvmf.o 00:02:35.214 LINK nvmf 00:02:36.656 LINK esnap 00:02:36.915 00:02:36.915 real 1m27.548s 00:02:36.915 user 15m27.344s 00:02:36.915 sys 5m34.755s 00:02:36.915 11:44:23 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:36.915 11:44:23 make -- common/autotest_common.sh@10 -- $ set +x 00:02:36.915 ************************************ 00:02:36.915 END TEST make 00:02:36.915 ************************************ 00:02:37.174 11:44:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:37.174 11:44:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:37.174 11:44:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:37.174 11:44:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:37.174 11:44:23 -- pm/common@44 -- $ pid=3908306 00:02:37.174 11:44:23 -- pm/common@50 -- $ kill -TERM 3908306 00:02:37.174 11:44:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:37.174 11:44:23 -- pm/common@44 -- $ pid=3908308 00:02:37.174 11:44:23 -- pm/common@50 -- $ kill -TERM 3908308 00:02:37.174 11:44:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:37.174 11:44:23 -- pm/common@44 -- $ pid=3908310 00:02:37.174 11:44:23 -- pm/common@50 -- $ kill -TERM 3908310 00:02:37.174 11:44:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:37.174 11:44:23 -- pm/common@44 -- $ pid=3908332 00:02:37.174 11:44:23 -- pm/common@50 -- $ sudo -E kill -TERM 3908332 00:02:37.174 11:44:23 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:37.174 11:44:23 -- nvmf/common.sh@7 -- # uname -s 00:02:37.174 11:44:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.174 11:44:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.174 11:44:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.174 11:44:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.174 11:44:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.174 11:44:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.174 11:44:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.174 11:44:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.174 11:44:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.174 11:44:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.174 11:44:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:37.174 11:44:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:37.174 11:44:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.174 11:44:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.174 11:44:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:37.174 11:44:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:37.174 11:44:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:37.174 11:44:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.174 11:44:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.174 11:44:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.174 11:44:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.174 11:44:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.174 11:44:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.174 11:44:23 -- paths/export.sh@5 -- # export PATH 00:02:37.174 11:44:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.174 11:44:23 -- nvmf/common.sh@47 -- # : 0 00:02:37.174 11:44:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:37.174 11:44:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:37.174 11:44:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:37.174 11:44:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.174 11:44:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.174 
11:44:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:37.174 11:44:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:37.174 11:44:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:37.174 11:44:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.174 11:44:23 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.174 11:44:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.174 11:44:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:37.174 11:44:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:37.174 11:44:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.174 11:44:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:37.174 11:44:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.174 11:44:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.174 11:44:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.174 11:44:23 -- spdk/autotest.sh@48 -- # udevadm_pid=3979037 00:02:37.174 11:44:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.174 11:44:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.174 11:44:23 -- pm/common@17 -- # local monitor 00:02:37.174 11:44:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@21 -- # date +%s 00:02:37.174 11:44:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.174 11:44:23 -- pm/common@21 -- # date +%s 00:02:37.174 11:44:23 -- pm/common@25 -- # sleep 1 00:02:37.174 11:44:23 -- pm/common@21 -- # date +%s 00:02:37.174 11:44:23 -- pm/common@21 -- # date +%s 00:02:37.174 11:44:23 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900663 00:02:37.174 11:44:23 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900663 00:02:37.174 11:44:23 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900663 00:02:37.174 11:44:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900663 00:02:37.433 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900663_collect-vmstat.pm.log 00:02:37.433 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900663_collect-cpu-load.pm.log 00:02:37.433 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900663_collect-cpu-temp.pm.log 00:02:37.433 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900663_collect-bmc-pm.bmc.pm.log 00:02:38.371 11:44:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup 
|| :; exit 1' SIGINT SIGTERM EXIT 00:02:38.371 11:44:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.371 11:44:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:38.371 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:02:38.371 11:44:24 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.371 11:44:24 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:38.371 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:02:38.371 11:44:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:38.371 11:44:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:38.371 11:44:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:38.371 11:44:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:38.371 11:44:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:38.371 11:44:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:38.371 11:44:24 -- common/autotest_common.sh@1455 -- # uname 00:02:38.371 11:44:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:38.371 11:44:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:38.371 11:44:24 -- common/autotest_common.sh@1475 -- # uname 00:02:38.371 11:44:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:38.371 11:44:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:38.371 11:44:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:38.371 11:44:24 -- spdk/autotest.sh@72 -- # hash lcov 00:02:38.371 11:44:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:38.371 11:44:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:38.371 --rc lcov_branch_coverage=1 00:02:38.371 --rc lcov_function_coverage=1 00:02:38.371 --rc genhtml_branch_coverage=1 00:02:38.371 --rc genhtml_function_coverage=1 00:02:38.371 --rc genhtml_legend=1 00:02:38.371 --rc geninfo_all_blocks=1 00:02:38.371 ' 00:02:38.371 11:44:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:38.371 --rc lcov_branch_coverage=1 00:02:38.371 --rc lcov_function_coverage=1 00:02:38.371 --rc genhtml_branch_coverage=1 00:02:38.371 --rc genhtml_function_coverage=1 00:02:38.371 --rc genhtml_legend=1 00:02:38.371 --rc geninfo_all_blocks=1 00:02:38.371 ' 00:02:38.371 11:44:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:38.371 --rc lcov_branch_coverage=1 00:02:38.371 --rc lcov_function_coverage=1 00:02:38.371 --rc genhtml_branch_coverage=1 00:02:38.371 --rc genhtml_function_coverage=1 00:02:38.371 --rc genhtml_legend=1 00:02:38.371 --rc geninfo_all_blocks=1 00:02:38.371 --no-external' 00:02:38.371 11:44:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:38.371 --rc lcov_branch_coverage=1 00:02:38.371 --rc lcov_function_coverage=1 00:02:38.371 --rc genhtml_branch_coverage=1 00:02:38.371 --rc genhtml_function_coverage=1 00:02:38.371 --rc genhtml_legend=1 00:02:38.371 --rc geninfo_all_blocks=1 00:02:38.371 --no-external' 00:02:38.371 11:44:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:38.371 lcov: LCOV version 1.14 00:02:38.371 11:44:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 
--rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:53.249 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:53.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:08.149 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:08.149 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:08.150 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:08.150 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:08.150 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:08.150 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:08.150 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:08.150 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:08.151 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:08.151 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:08.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:08.151 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:11.433 11:44:57 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.433 11:44:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:11.433 11:44:57 -- common/autotest_common.sh@10 -- # set +x 00:03:11.433 11:44:57 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.433 11:44:57 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.685 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.685 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:15.685 11:45:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:15.685 11:45:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:15.685 11:45:01 -- 
common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:15.686 11:45:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:15.686 11:45:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:15.686 11:45:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:15.686 11:45:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:15.686 11:45:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.686 11:45:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:15.686 11:45:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:15.686 11:45:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:15.686 11:45:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:15.686 11:45:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:15.686 11:45:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:15.686 11:45:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:15.686 No valid GPT data, bailing 00:03:15.686 11:45:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:15.686 11:45:01 -- scripts/common.sh@391 -- # pt= 00:03:15.686 11:45:01 -- scripts/common.sh@392 -- # return 1 00:03:15.686 11:45:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:15.686 1+0 records in 00:03:15.686 1+0 records out 00:03:15.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537269 s, 195 MB/s 00:03:15.686 11:45:01 -- spdk/autotest.sh@118 -- # sync 00:03:15.686 11:45:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:15.686 11:45:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:15.686 11:45:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:23.809 11:45:08 -- spdk/autotest.sh@124 -- # uname -s 00:03:23.809 11:45:08 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:23.809 11:45:08 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:23.809 11:45:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.809 11:45:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.809 11:45:08 -- common/autotest_common.sh@10 -- # set +x 00:03:23.809 ************************************ 00:03:23.809 START TEST setup.sh 00:03:23.809 ************************************ 00:03:23.809 11:45:08 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:23.809 * Looking for test storage... 
00:03:23.809 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:23.809 11:45:08 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:23.809 11:45:08 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:23.809 11:45:08 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:23.809 11:45:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.809 11:45:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.809 11:45:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:23.809 ************************************ 00:03:23.809 START TEST acl 00:03:23.809 ************************************ 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:23.809 * Looking for test storage... 00:03:23.809 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:23.809 11:45:08 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.809 11:45:08 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.810 11:45:08 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:23.810 11:45:08 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:23.810 11:45:08 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:23.810 11:45:08 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:23.810 11:45:08 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:23.810 11:45:08 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.810 11:45:08 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.001 11:45:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.001 11:45:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.001 11:45:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.001 11:45:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.001 11:45:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.001 11:45:13 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:31.291 Hugepages 00:03:31.291 node hugesize free / total 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 00:03:31.549 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- 
# [[ 0000:80:04.1 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.549 11:45:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.807 11:45:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:31.807 11:45:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:31.807 11:45:17 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.807 11:45:17 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.807 11:45:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.807 ************************************ 00:03:31.807 START TEST denied 00:03:31.807 ************************************ 00:03:31.807 11:45:17 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:31.807 11:45:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:31.807 11:45:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:31.807 
11:45:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:31.807 11:45:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.807 11:45:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:37.076 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.076 11:45:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.346 00:03:42.346 real 0m9.905s 00:03:42.346 user 0m3.176s 00:03:42.346 sys 0m6.056s 00:03:42.346 11:45:27 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.346 11:45:27 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:42.346 ************************************ 00:03:42.346 END TEST denied 00:03:42.346 ************************************ 00:03:42.346 11:45:27 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.346 11:45:27 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.346 11:45:27 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.346 11:45:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.346 ************************************ 00:03:42.346 START TEST allowed 00:03:42.346 ************************************ 00:03:42.346 11:45:27 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:42.346 11:45:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:42.346 11:45:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:42.346 11:45:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:42.346 11:45:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.346 11:45:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:48.945 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.945 11:45:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:48.945 11:45:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:48.945 11:45:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:48.945 11:45:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.945 11:45:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.233 00:03:52.233 real 0m10.555s 00:03:52.233 user 0m2.845s 00:03:52.233 sys 0m5.817s 00:03:52.233 11:45:38 setup.sh.acl.allowed -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.233 11:45:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:52.233 ************************************ 00:03:52.233 END TEST allowed 00:03:52.233 ************************************ 00:03:52.233 00:03:52.233 real 0m29.640s 00:03:52.233 user 0m9.260s 00:03:52.233 sys 0m18.132s 00:03:52.233 11:45:38 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.233 11:45:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.233 ************************************ 00:03:52.233 END TEST acl 00:03:52.233 ************************************ 00:03:52.233 11:45:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.233 11:45:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.233 11:45:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.233 11:45:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.494 ************************************ 00:03:52.494 START TEST hugepages 00:03:52.494 ************************************ 00:03:52.494 11:45:38 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.494 * Looking for test storage... 00:03:52.494 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41397924 kB' 'MemAvailable: 45386944 kB' 'Buffers: 6064 kB' 'Cached: 10543916 kB' 'SwapCached: 0 kB' 'Active: 7371112 kB' 'Inactive: 3689560 kB' 'Active(anon): 6972688 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513904 kB' 
'Mapped: 172340 kB' 'Shmem: 6461996 kB' 'KReclaimable: 548744 kB' 'Slab: 1200228 kB' 'SReclaimable: 548744 kB' 'SUnreclaim: 651484 kB' 'KernelStack: 22304 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8449056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.494 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 
11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.495 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.496 11:45:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:52.496 11:45:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.496 11:45:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.496 11:45:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.496 ************************************ 00:03:52.496 START TEST default_setup 00:03:52.496 ************************************ 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.496 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.497 11:45:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:56.692 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.692 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.602 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
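Every get_meminfo call in this trace (Hugepagesize during the earlier probe, AnonHugePages here inside verify_nr_hugepages) follows the same pattern: read the whole meminfo file into an array, strip any leading 'Node N ' prefix, then scan 'key: value' pairs until the requested key is found and echo its value. A simplified standalone version of that pattern, assuming only the system-wide /proc/meminfo (the per-node variant additionally reads /sys/devices/system/node/node$node/meminfo, which is why the trace shows the extglob substitution on the mem array):

# Simplified sketch of the get_meminfo pattern traced above (system-wide /proc/meminfo only).
get_meminfo() {
    local get=$1 line var val
    local -a mem
    mapfile -t mem < /proc/meminfo
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # value in kB for most keys, a bare count for HugePages_*
            return 0
        fi
    done
    return 1
}
# Example: get_meminfo Hugepagesize prints 2048 on this node, matching the echo 2048 seen above.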
00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43537220 kB' 'MemAvailable: 47525888 kB' 'Buffers: 6064 kB' 'Cached: 10544064 kB' 'SwapCached: 0 kB' 'Active: 7388108 kB' 'Inactive: 3689560 kB' 'Active(anon): 6989684 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530472 kB' 'Mapped: 172664 kB' 'Shmem: 6462144 kB' 'KReclaimable: 548392 kB' 'Slab: 1198156 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649764 kB' 'KernelStack: 22208 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8469208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.602 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 
11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.603 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43536384 kB' 'MemAvailable: 47525052 kB' 'Buffers: 6064 kB' 'Cached: 10544064 kB' 'SwapCached: 0 kB' 'Active: 7388720 kB' 'Inactive: 3689560 kB' 'Active(anon): 6990296 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531120 kB' 'Mapped: 172656 kB' 'Shmem: 6462144 kB' 'KReclaimable: 548392 kB' 'Slab: 1198084 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649692 kB' 'KernelStack: 22288 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8469360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.604 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
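What this stretch of the trace shows is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: IFS is set to ': ', each line is read into a key and a value, every non-matching key falls through to continue, and the value of the matching key (here HugePages_Surp) is echoed back before return 0. Below is a minimal sketch of that lookup pattern reconstructed from the trace; the function and variable names are illustrative, not the actual setup/common.sh code, which strips the "Node <id>" prefix with a bash array expansion rather than sed.

  # Sketch only: reconstructs the lookup pattern visible in this trace, not the
  # real setup/common.sh implementation.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # With a node id, read the per-node counters instead (the trace does this
      # for node0 near the end of this section).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node files prefix each line with "Node <id> "; drop it so the key
      # lands in $var, then split on ': ' exactly as the traced helper does.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0
  }

  # Usage, mirroring the calls in this trace:
  #   surp=$(get_meminfo_sketch HugePages_Surp)        -> 0
  #   total=$(get_meminfo_sketch HugePages_Total 0)    -> node0 value

The trace that follows continues the same scan over the remaining /proc/meminfo keys.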
00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 
11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.605 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.606 11:45:44 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43536000 kB' 'MemAvailable: 47524668 kB' 'Buffers: 6064 kB' 'Cached: 10544096 kB' 'SwapCached: 0 kB' 'Active: 7389748 kB' 'Inactive: 3689560 kB' 'Active(anon): 6991324 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532524 kB' 'Mapped: 173080 kB' 'Shmem: 6462176 kB' 'KReclaimable: 548392 kB' 'Slab: 1198076 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649684 kB' 'KernelStack: 22160 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8472296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
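Further down, the same helper is invoked for HugePages_Rsvd and HugePages_Total, after which the test echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and asserts that the configured pool matches what the kernel reports ("(( 1024 == nr_hugepages + surp + resv ))") before splitting the pages across NUMA nodes. A rough, self-contained equivalent of that consistency check, with illustrative variable names and awk standing in for the traced read loop:

  # Sketch only: mirrors the accounting this trace performs a few lines below.
  nr_hugepages=1024                                                # value the test echoes below
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)       # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)       # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)      # 1024 in this run

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
  else
      echo "unexpected hugepage accounting: total=$total" >&2
  fi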
00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.608 nr_hugepages=1024 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.608 resv_hugepages=0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.608 surplus_hugepages=0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.608 anon_hugepages=0 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43527512 kB' 'MemAvailable: 47516180 kB' 'Buffers: 6064 kB' 'Cached: 10544112 kB' 'SwapCached: 0 kB' 'Active: 7393844 kB' 'Inactive: 3689560 kB' 'Active(anon): 6995420 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536660 kB' 'Mapped: 173080 kB' 'Shmem: 6462192 kB' 'KReclaimable: 548392 kB' 'Slab: 1198076 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649684 kB' 'KernelStack: 22352 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8475892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218864 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.608 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.609 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.609 11:45:44 
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / continue cycle for each remaining /proc/meminfo field (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, Vmalloc*, Percpu, HardwareCorrupted, AnonHugePages, Shmem*HugePages, File*HugePages, CmaTotal, CmaFree), none of which match HugePages_Total]
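The cycle condensed above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time and skipping everything that is not the requested field (HugePages_Total in this pass). A minimal sketch of that pattern, using a hypothetical helper name rather than the real function (the real helper also handles per-node meminfo files and runs under xtrace):

#!/usr/bin/env bash
# get_meminfo_value KEY -- print the value column for KEY from /proc/meminfo.
# Hypothetical stand-in for the traced get_meminfo, shown only to make the
# xtrace pattern above readable.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # keys that do not match fall through, exactly like the "continue"
        # records in the trace
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1   # key not present
}
# Example on this runner: get_meminfo_value HugePages_Total  ->  1024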
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.610 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25368808 kB' 'MemUsed: 7270332 kB' 'SwapCached: 0 kB' 'Active: 2858608 kB' 'Inactive: 231284 
kB' 'Active(anon): 2725560 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710160 kB' 'Mapped: 76788 kB' 'AnonPages: 382960 kB' 'Shmem: 2345828 kB' 'KernelStack: 12840 kB' 'PageTables: 6244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533900 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 314180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.611 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue [xtrace condensed: the same read/continue cycle walks the remaining node0 meminfo fields (Inactive(anon) through FilePmdMapped) looking for HugePages_Surp]
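For this pass the trace called get_meminfo HugePages_Surp 0, so the source switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading "Node 0 " prefix was stripped before matching (the mem=("${mem[@]#Node +([0-9]) }") step shown earlier). A rough sketch of that node-aware variant, under the same hypothetical-helper caveat:

# node_meminfo_value KEY [NODE] -- like the sketch above, but read the
# per-node meminfo file when a node id is given (assumption: the sysfs
# layout seen in this trace, with lines prefixed by "Node <id> ").
node_meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed "s/^Node $node //" "$mem_f")
    return 1
}
# node_meminfo_value HugePages_Surp 0  ->  0, matching the "echo 0" below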
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.612 node0=1024 expecting 1024 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.612 00:03:58.612 real 0m6.064s 00:03:58.612 user 0m1.402s 00:03:58.612 sys 0m2.693s 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.612 11:45:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:58.612 ************************************ 00:03:58.612 END TEST default_setup 00:03:58.612 ************************************ 00:03:58.612 11:45:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:58.612 11:45:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.612 11:45:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.612 11:45:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.872 ************************************ 00:03:58.872 START TEST per_node_1G_alloc 00:03:58.872 
************************************ 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.872 11:45:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:03.069 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.2 (8086 2021): Already 
using the vfio-pci driver 00:04:03.069 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.069 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43563488 kB' 'MemAvailable: 47552156 kB' 'Buffers: 6064 kB' 'Cached: 10544224 kB' 'SwapCached: 0 kB' 'Active: 7386136 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987712 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528112 kB' 'Mapped: 171900 kB' 'Shmem: 6462304 kB' 'KReclaimable: 548392 kB' 'Slab: 1197784 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 
649392 kB' 'KernelStack: 22080 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8454276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] [xtrace condensed: setup/common.sh@31-32 repeats the read/continue cycle for the other /proc/meminfo fields (Active down to HardwareCorrupted), none of which match AnonHugePages] setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- #
continue 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43564052 kB' 'MemAvailable: 47552720 kB' 'Buffers: 6064 kB' 'Cached: 10544224 kB' 'SwapCached: 0 kB' 'Active: 7385524 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987100 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527980 kB' 'Mapped: 171400 kB' 'Shmem: 6462304 kB' 'KReclaimable: 548392 kB' 'Slab: 1197744 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649352 kB' 'KernelStack: 22032 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8454292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.071 11:45:48 
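The snapshot just above reports 'HugePages_Total: 1024' with 'Hugepagesize: 2048 kB', which is consistent with the request made at the start of this test: get_test_nr_hugepages 1048576 0 1 turns the 1048576 kB (1 GiB) per-node request into 512 default-size pages on each of nodes 0 and 1 (NRHUGE=512 HUGENODE=0,1). A back-of-the-envelope sketch of that arithmetic, as an illustration rather than the script's exact code:

# Sizing math behind "nr_hugepages=512" in this test
# (values taken from the trace: 1 GiB per node, 2048 kB hugepages).
size_kb=1048576                              # requested per-node allocation, in kB
hugepage_kb=2048                             # default hugepage size on this runner
nr_per_node=$(( size_kb / hugepage_kb ))     # 512 pages per node
declare -a nodes_hp
for node in 0 1; do
    nodes_hp[node]=$nr_per_node              # 512 pages on node0 and on node1
done
echo "total pages: $(( nodes_hp[0] + nodes_hp[1] ))"   # 1024, as reported above
# The test then hands this to the setup script roughly as:
#   NRHUGE=$nr_per_node HUGENODE=0,1 ./scripts/setup.sh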
[xtrace condensed: the read/continue cycle again skips each /proc/meminfo field that is not HugePages_Surp (MemFree through CommitLimit shown here); the scan continues below]
11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 
11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43564304 kB' 'MemAvailable: 47552972 kB' 'Buffers: 6064 kB' 'Cached: 10544244 kB' 'SwapCached: 0 kB' 'Active: 7385844 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987420 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528288 kB' 'Mapped: 171400 kB' 'Shmem: 6462324 kB' 'KReclaimable: 548392 kB' 'Slab: 1197744 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649352 kB' 'KernelStack: 22048 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8454316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.073 
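For readers following the trace, this is a minimal sketch of what get_meminfo in setup/common.sh is doing in the lines above, reconstructed only from the trace itself (the real function uses mapfile plus an extglob strip of the "Node N " prefix, as the @28/@29 lines show; the loop below is an approximation, not the verbatim SPDK source):

#!/usr/bin/env bash
# get_meminfo <field> [node]: print the value of one meminfo field,
# either system-wide (/proc/meminfo) or for a single NUMA node.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f=/proc/meminfo
	# With a node argument the per-node copy is used instead
	# (this mirrors the @23/@24 checks in the trace).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	# The long [[ <field> == ... ]] / continue runs in the log are this
	# loop skipping every field until the requested one is reached.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
}

# Usage matching the calls in this log:
surp=$(get_meminfo HugePages_Surp)   # -> 0
resv=$(get_meminfo HugePages_Rsvd)   # -> 0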
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43564304 kB' 'MemAvailable: 47552972 kB' 'Buffers: 6064 kB' 'Cached: 10544244 kB' 'SwapCached: 0 kB' 'Active: 7385844 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987420 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528288 kB' 'Mapped: 171400 kB' 'Shmem: 6462324 kB' 'KReclaimable: 548392 kB' 'Slab: 1197744 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649352 kB' 'KernelStack: 22048 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8454316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:03.073 11:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue -- the check/continue pair (with the @31 IFS=': ' / read -r var val _ between iterations) repeats for every field from MemTotal through HugePages_Free
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.075 nr_hugepages=1024
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.075 resv_hugepages=0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.075 surplus_hugepages=0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.075 anon_hugepages=0
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
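The two arithmetic checks above are the accounting step of the test: the preallocated pool must be exactly the requested size, with no surplus or reserved pages outstanding. As a standalone illustration, using the values reported a few lines earlier in this log (variable names follow setup/hugepages.sh; the error messages are added here only for readability):

#!/usr/bin/env bash
nr_hugepages=1024   # pages requested by the test
surp=0              # from get_meminfo HugePages_Surp
resv=0              # from get_meminfo HugePages_Rsvd

# Equivalent of the hugepages.sh@107/@109 assertions in the trace.
(( 1024 == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
(( 1024 == nr_hugepages )) || echo "unexpected HugePages_Total" >&2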
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.075 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43563232 kB' 'MemAvailable: 47551900 kB' 'Buffers: 6064 kB' 'Cached: 10544268 kB' 'SwapCached: 0 kB' 'Active: 7386032 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987608 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528536 kB' 'Mapped: 171400 kB' 'Shmem: 6462348 kB' 'KReclaimable: 548392 kB' 'Slab: 1197744 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649352 kB' 'KernelStack: 22096 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8453972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:03.076 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue -- the check/continue pair (with the @31 IFS=': ' / read -r var val _ between iterations) repeats for every field from MemTotal through HugePages_Free
00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26422924 kB' 'MemUsed: 6216216 kB' 'SwapCached: 0 kB' 'Active: 2857088 kB' 'Inactive: 231284 kB' 'Active(anon): 2724040 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710232 kB' 'Mapped: 76008 kB' 'AnonPages: 381328 kB' 'Shmem: 2345900 kB' 'KernelStack: 12440 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533692 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
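The get_meminfo walk traced above is, in effect, a keyed lookup over a meminfo file: it reads /proc/meminfo by default, switches to the per-node file when a node index is supplied, strips the "Node <N> " prefix those lines carry, and echoes the value next to the requested key. A minimal stand-alone sketch of that pattern follows; the helper name and its structure are illustrative, not the actual setup/common.sh implementation.

  #!/usr/bin/env bash
  # Illustrative sketch of a keyed meminfo lookup; get_meminfo_sketch is a
  # hypothetical helper, not the SPDK setup/common.sh function.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      # With a node index, prefer the per-node counters exposed by the kernel.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node lines carry a "Node <N> " prefix; strip it, then print the
      # value that follows the requested key.
      sed 's/^Node [0-9]* //' "$file" |
          awk -v k="$key" -F '[: ]+' '$1 == k { print $2; exit }'
  }
  get_meminfo_sketch HugePages_Surp 0   # e.g. prints 0, matching the trace above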
00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.077 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
... (the compare-and-continue loop walks every remaining node 0 key, in the same order as the snapshot above, without a match) ...
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
... (get_meminfo sets up the same locals as for node 0, this time with node=1 and mem_f=/sys/devices/system/node/node1/meminfo) ...
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17141640 kB' 'MemUsed: 10514440 kB' 'SwapCached: 0 kB' 'Active: 4529696 kB' 'Inactive: 3458276 kB' 'Active(anon): 4264320 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7840100 kB' 'Mapped: 95416 kB' 'AnonPages: 147952 kB' 'Shmem: 4116448 kB' 'KernelStack: 9576 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328672 kB' 'Slab: 664052 kB' 'SReclaimable: 328672 kB' 'SUnreclaim: 335380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.079 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
... (the same per-key walk repeats over node 1's counters until the requested key is reached) ...
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.080 node0=512 expecting 512
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:03.080 node1=512 expecting 512
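What the two "expecting 512" lines assert is that each NUMA node ended up holding its even share of the 1024 reserved 2 MiB pages. A rough equivalent of that per-node check, written against the kernel's standard sysfs hugepage counters, is sketched below; the expected values simply mirror the 512/512 split traced above and are not read from the test itself.

  #!/usr/bin/env bash
  # Sketch: compare each node's 2048 kB hugepage count with the expected split.
  declare -A expected=([0]=512 [1]=512)   # mirrors the nodes_test values above
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      if (( actual == ${expected[$node]:-0} )); then
          echo "node$node=$actual expecting ${expected[$node]:-0}"
      else
          echo "node$node mismatch: got $actual, expected ${expected[$node]:-0}" >&2
      fi
  done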
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:03.080 
00:04:03.080 real    0m4.367s
00:04:03.080 user    0m1.650s
00:04:03.080 sys     0m2.798s
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:03.080 11:45:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.080 ************************************
00:04:03.080 END TEST per_node_1G_alloc
00:04:03.080 ************************************
00:04:03.080 11:45:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:03.080 11:45:49 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:03.080 11:45:49 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:03.080 11:45:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.339 ************************************
00:04:03.339 START TEST even_2G_alloc
00:04:03.339 ************************************
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
... (get_test_nr_hugepages_per_node initialises its locals, finds no user-supplied node list, and splits the 1024 pages evenly over the two nodes: nodes_test[1]=512, then nodes_test[0]=512) ...
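get_test_nr_hugepages, traced above, turns the requested 2097152 kB into a page count and an even per-node split: with the platform's 2048 kB default hugepage size that is 1024 pages, or 512 per node on this two-node machine. The same arithmetic as a self-contained sketch (variable names are illustrative, not the hugepages.sh internals):

  #!/usr/bin/env bash
  # Worked example of the size -> pages -> per-node split shown in the trace.
  size_kb=2097152                                                    # requested 2 GiB
  hugepage_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)   # 2048 on this box
  no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)      # 2 on this box
  nr_hugepages=$(( size_kb / hugepage_kb ))                          # 2097152 / 2048 = 1024
  per_node=$(( nr_hugepages / no_nodes ))                            # 1024 / 2 = 512
  echo "nr_hugepages=$nr_hugepages, $per_node per node across $no_nodes nodes"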
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.339 11:45:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:06.622 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.622 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
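NRHUGE=1024 together with HUGE_EVEN_ALLOC=yes asks scripts/setup.sh to reserve the pages spread evenly over the NUMA nodes before the devices are rebound. The script's own logic is not reproduced here; one way such an even reservation can be requested from the kernel, sketched purely for orientation and not claimed to be the scripts/setup.sh code path, is to write each node's share to the standard per-node sysfs counter (root required):

  #!/usr/bin/env bash
  # Sketch only: even 2048 kB hugepage reservation across nodes via sysfs.
  # Assumption-level illustration, not the actual scripts/setup.sh behaviour.
  NRHUGE=${NRHUGE:-1024}
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( NRHUGE / ${#nodes[@]} ))
  for node_dir in "${nodes[@]}"; do
      echo "$per_node" > "$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
  done
  grep -E '^HugePages_(Total|Free):' /proc/meminfo   # should now report 1024 / 1024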
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.885 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43558868 kB' 'MemAvailable: 47547536 kB' 'Buffers: 6064 kB' 'Cached: 10544416 kB' 'SwapCached: 0 kB' 'Active: 7386644 kB' 'Inactive: 3689560 kB' 'Active(anon): 6988220 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528628 kB' 'Mapped: 171560 kB' 'Shmem: 6462496 kB' 'KReclaimable: 548392 kB' 'Slab: 1197548 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649156 kB' 'KernelStack: 22304 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8458140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218988 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
... (the compare-and-continue loop walks the remaining keys — MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk and Percpu — none of which matches AnonHugePages) ...
00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43561968 kB' 'MemAvailable: 47550636 kB' 'Buffers: 6064 kB' 'Cached: 10544420 kB' 'SwapCached: 0 kB' 'Active: 7386788 kB' 'Inactive: 3689560 kB' 'Active(anon): 6988364 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528752 kB' 'Mapped: 171512 kB' 'Shmem: 6462500 kB' 'KReclaimable: 548392 kB' 'Slab: 1197548 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649156 kB' 'KernelStack: 22240 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8455292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- 
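Note: the get_meminfo call traced here first decides where to read from. With no node given it reads the whole-system /proc/meminfo; a per-node query would read the node's sysfs meminfo file and strip the leading "Node <N> " from every entry, which is what the mem=(...) expansion above does. A minimal standalone sketch of that selection step, assuming an illustrative helper name rather than SPDK's setup/common.sh verbatim:

#!/usr/bin/env bash
# Sketch: pick the meminfo source the way the trace suggests and normalize it.
shopt -s extglob  # needed for the +([0-9]) pattern below

meminfo_lines() {  # illustrative name, not the SPDK helper itself
	local node=$1 mem_f
	local -a mem
	mem_f=/proc/meminfo
	# Per-node counters live under sysfs; fall back to /proc/meminfo otherwise.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node lines look like "Node 0 HugePages_Total:  512"; drop the prefix
	# so both sources parse identically downstream.
	mem=("${mem[@]#Node +([0-9]) }")
	printf '%s\n' "${mem[@]}"
}

# e.g. meminfo_lines      -> system-wide view (node is empty in this run)
#      meminfo_lines 0    -> NUMA node 0 only

In the run above the node variable is empty, so the sysfs test fails and the scan appears to stay on /proc/meminfo, whose snapshot is printed just before this note.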
00:04:06.886 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace loop trimmed: each key of the snapshot above, MemTotal through HugePages_Rsvd, is tested against HugePages_Surp and skipped with continue]
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
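The pattern this trace keeps repeating -- IFS=': ', read -r var val _, then a literal comparison of each key until the requested one matches -- is just a field lookup over meminfo lines. A self-contained sketch of that loop, where get_meminfo_value is an illustrative name and not the exact SPDK helper:

#!/usr/bin/env bash
# Sketch: split each meminfo line on ": ", skip non-matching keys, and print
# the bare number for the first key that matches (the "kB" suffix lands in
# the throwaway third field).
get_meminfo_value() {
	local get=$1 var val rest
	while IFS=': ' read -r var val rest; do
		[[ $var == "$get" ]] || continue   # e.g. HugePages_Surp
		echo "${val:-0}"
		return 0
	done < /proc/meminfo
	echo 0                                     # key absent -> report 0
	return 1
}

# e.g. get_meminfo_value HugePages_Surp   # prints 0 on this box, matching surp=0 below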
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43563420 kB' 'MemAvailable: 47552088 kB' 'Buffers: 6064 kB' 'Cached: 10544436 kB' 'SwapCached: 0 kB' 'Active: 7385408 kB' 'Inactive: 3689560 kB' 'Active(anon): 6986984 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527764 kB' 'Mapped: 171416 kB' 'Shmem: 6462516 kB' 'KReclaimable: 548392 kB' 'Slab: 1197508 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649116 kB' 'KernelStack: 22064 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8455316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:06.887 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace loop trimmed: each key, MemTotal through HugePages_Free, is tested against HugePages_Rsvd and skipped with continue]
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.888 nr_hugepages=1024
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.888 resv_hugepages=0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.888 surplus_hugepages=0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.888 anon_hugepages=0
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
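With anon=0, surp=0 and resv=0 collected, the test echoes its counters and asserts that the 1024 requested 2048 kB pages (2 GiB in total) are all accounted for before re-reading HugePages_Total below. A self-contained sketch of that bookkeeping, using awk lookups in place of the traced get_meminfo helper; the variable names and the hard-coded expectation of 1024 pages are assumptions taken from this run, not SPDK's hugepages.sh verbatim:

#!/usr/bin/env bash
# Sketch of the checks logged at hugepages.sh@102-109 above.
expected=1024

anon=$(awk '$1 == "AnonHugePages:"    {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"   {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:"   {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

echo "nr_hugepages=$total"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Mirror the two arithmetic checks in the trace: the request must be covered
# once surplus/reserved pages are added in, and, since both are 0 here,
# HugePages_Total itself must equal the request.
(( expected == total + surp + resv )) || exit 1
(( expected == total )) || exit 1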
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.888 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43563420 kB' 'MemAvailable: 47552088 kB' 'Buffers: 6064 kB' 'Cached: 10544456 kB' 'SwapCached: 0 kB' 'Active: 7385412 kB' 'Inactive: 3689560 kB' 'Active(anon): 6986988 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527764 kB' 'Mapped: 171416 kB' 'Shmem: 6462536 kB' 'KReclaimable: 548392 kB' 'Slab: 1197508 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649116 kB' 'KernelStack: 22064 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8455336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace loop trimmed: keys MemTotal through Writeback are tested against HugePages_Total and skipped with continue; the scan resumes below]
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 
11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.889 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26421828 kB' 'MemUsed: 6217312 kB' 'SwapCached: 0 kB' 'Active: 2854888 kB' 'Inactive: 231284 kB' 'Active(anon): 2721840 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710396 kB' 'Mapped: 76024 kB' 'AnonPages: 379016 kB' 'Shmem: 2346064 kB' 'KernelStack: 12456 kB' 'PageTables: 5400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533368 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17141592 kB' 'MemUsed: 10514488 kB' 'SwapCached: 0 kB' 'Active: 4530912 kB' 'Inactive: 3458276 kB' 'Active(anon): 4265536 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7840144 kB' 'Mapped: 95392 kB' 'AnonPages: 149112 kB' 'Shmem: 4116492 kB' 'KernelStack: 9608 kB' 'PageTables: 2928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328672 kB' 'Slab: 664140 kB' 'SReclaimable: 328672 kB' 'SUnreclaim: 335468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.890 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.891 node0=512 expecting 512 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:06.891 node1=512 expecting 512 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.891 00:04:06.891 real 0m3.803s 00:04:06.891 user 0m1.301s 00:04:06.891 sys 0m2.514s 00:04:06.891 11:45:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.891 11:45:52 
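The get_meminfo calls traced above read either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strip the "Node N " prefix, split each line on ': ', and echo the value of the requested field (for example 1024 for HugePages_Total, 0 for HugePages_Rsvd or HugePages_Surp). The lines below are a minimal standalone sketch of that parsing pattern; get_meminfo_sketch is a hypothetical helper written for illustration, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Hedged sketch (illustration only, not setup/common.sh): echo the value of one
# meminfo field, optionally from a per-node meminfo file, using the parsing idea
# visible in the trace above (strip "Node N " prefix, split on ': ').
shopt -s extglob

get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # When a node index is given, prefer the per-node view if it exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    line=${line#Node +([0-9]) }          # per-node files prefix each line with "Node N "
    IFS=': ' read -r var val _ <<<"$line"
    if [[ $var == "$get" ]]; then
      echo "$val"                        # e.g. "1024" for HugePages_Total
      return 0
    fi
  done <"$mem_f"
  return 1
}

# Example: get_meminfo_sketch HugePages_Free 0   -> free 2048 kB pages on NUMA node 0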
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.891 ************************************ 00:04:06.891 END TEST even_2G_alloc 00:04:06.891 ************************************ 00:04:07.178 11:45:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:07.178 11:45:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.178 11:45:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.178 11:45:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.178 ************************************ 00:04:07.178 START TEST odd_alloc 00:04:07.178 ************************************ 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.178 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.179 11:45:53 setup.sh.hugepages.odd_alloc -- 
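Before the setup.sh output that follows, the odd_alloc trace above has already computed its per-node targets: a request of 2098176 kB (HUGEMEM=2049), which the script converts to nr_hugepages=1025 at the 2048 kB hugepage size and spreads across the two NUMA nodes as 513 and 512 (the even_2G_alloc run above expected an even 512 per node). The sketch below illustrates such a split under the assumption that the remainder goes to node 0, which matches the trace; split_hugepages_sketch is a hypothetical helper for illustration, not the actual get_test_nr_hugepages_per_node logic in setup/hugepages.sh.

#!/usr/bin/env bash
# Hedged sketch (simplified, illustration only): split a requested hugepage count
# across NUMA nodes, giving any remainder to the lowest-numbered nodes so an odd
# total like 1025 becomes 513 + 512, matching the odd_alloc trace above.
split_hugepages_sketch() {
  local total=$1 no_nodes=$2
  local base=$(( total / no_nodes ))
  local extra=$(( total % no_nodes ))
  local node count
  for (( node = 0; node < no_nodes; node++ )); do
    count=$base
    if (( node < extra )); then
      count=$(( count + 1 ))             # first 'extra' nodes get one extra page
    fi
    echo "node${node}=${count}"
  done
}

# Example: split_hugepages_sketch 1025 2   -> node0=513, node1=512
# Example: split_hugepages_sketch 1024 2   -> node0=512, node1=512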
setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:11.397 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.397 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.397 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43573312 kB' 'MemAvailable: 47561980 kB' 'Buffers: 6064 kB' 'Cached: 10544576 kB' 'SwapCached: 0 kB' 'Active: 7386340 kB' 
'Inactive: 3689560 kB' 'Active(anon): 6987916 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528008 kB' 'Mapped: 171500 kB' 'Shmem: 6462656 kB' 'KReclaimable: 548392 kB' 'Slab: 1197752 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649360 kB' 'KernelStack: 22080 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8455948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.398 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # 
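# The AnonHugePages lookup that just completed (anon=0) is the get_meminfo pattern this
# trace keeps repeating: snapshot /proc/meminfo (or a node's meminfo file), walk it with
# IFS=': ' and read -r, and print the value once the requested key matches. A condensed,
# self-contained sketch of that pattern is below; the function name is illustrative, and
# the real helper (setup/common.sh:get_meminfo) also reads the snapshot into an array and
# strips the "Node <N>" prefix from per-node files, which this sketch omits.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        # /proc/meminfo lines look like "HugePages_Free:     1025" or "MemTotal: ... kB"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# e.g. meminfo_value AnonHugePages   -> 0 on this run (no transparent hugepages in use)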
local mem_f mem 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43574008 kB' 'MemAvailable: 47562676 kB' 'Buffers: 6064 kB' 'Cached: 10544576 kB' 'SwapCached: 0 kB' 'Active: 7386192 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987768 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528328 kB' 'Mapped: 171424 kB' 'Shmem: 6462656 kB' 'KReclaimable: 548392 kB' 'Slab: 1197736 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649344 kB' 'KernelStack: 22064 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8455964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.399 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.400 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43573800 kB' 'MemAvailable: 47562468 kB' 'Buffers: 6064 kB' 'Cached: 10544580 kB' 'SwapCached: 0 kB' 'Active: 7385884 kB' 'Inactive: 3689560 kB' 'Active(anon): 6987460 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528016 kB' 'Mapped: 171424 kB' 'Shmem: 6462660 kB' 'KReclaimable: 548392 kB' 'Slab: 1197736 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649344 kB' 'KernelStack: 22064 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8455984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 
'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.401 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.402 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 
11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:11.403 nr_hugepages=1025 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.403 resv_hugepages=0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.403 surplus_hugepages=0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.403 anon_hugepages=0 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43574304 kB' 'MemAvailable: 47562972 kB' 'Buffers: 6064 kB' 'Cached: 10544580 kB' 'SwapCached: 0 kB' 'Active: 7386428 kB' 'Inactive: 3689560 kB' 'Active(anon): 6988004 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528560 kB' 'Mapped: 171424 kB' 'Shmem: 6462660 kB' 'KReclaimable: 548392 kB' 'Slab: 1197736 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649344 kB' 'KernelStack: 22080 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8456008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
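Once the system-wide scan completes, the script checks that HugePages_Total from /proc/meminfo (1025) accounts for the requested nr_hugepages plus surplus and reserved pages, and get_nodes then records a per-node split of 512 and 513 pages across the 2 NUMA nodes before the per-node lookups start. A short sketch of that consistency check, with variable names chosen here for illustration:

  nr_hugepages=1025 surp=0 resv=0
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  # Only proceed to the per-node checks when the global count accounts
  # for requested, surplus and reserved pages.
  if (( total == nr_hugepages + surp + resv )); then
      echo "system-wide hugepage count consistent: $total"
  fi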
00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26418548 kB' 'MemUsed: 6220592 kB' 'SwapCached: 0 kB' 'Active: 2854336 kB' 'Inactive: 231284 kB' 
'Active(anon): 2721288 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710448 kB' 'Mapped: 76032 kB' 'AnonPages: 378272 kB' 'Shmem: 2346116 kB' 'KernelStack: 12440 kB' 'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533604 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17155996 kB' 'MemUsed: 10500084 kB' 'SwapCached: 0 kB' 'Active: 4532580 kB' 'Inactive: 3458276 kB' 'Active(anon): 4267204 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7840256 kB' 'Mapped: 95416 kB' 'AnonPages: 150804 kB' 'Shmem: 4116604 kB' 'KernelStack: 9640 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328672 kB' 'Slab: 664132 kB' 'SReclaimable: 328672 kB' 'SUnreclaim: 335460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
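For node 1 the same per-node lookup runs against /sys/devices/system/node/node1/meminfo and reports HugePages_Total: 513 with no surplus pages. As a cross-check sketch (hypothetical helper, not part of setup/common.sh), per-node hugepage counters can also be read directly from the sysfs hugepages directory instead of parsing the per-node meminfo file:

  # node_hugepages NODE [SIZE_KB] - current nr_hugepages for one NUMA node.
  node_hugepages() {
      local node=$1 size_kb=${2:-2048}
      cat "/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB/nr_hugepages"
  }
  # node_hugepages 0   ->   512 on this run
  # node_hugepages 1   ->   513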
00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
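The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries above is get_meminfo (setup/common.sh) scanning the meminfo fields one by one; as the next few entries show, the loop stops when it reaches HugePages_Surp, echoes the value (0 here) and returns. A minimal sketch of that helper, reconstructed from the xtrace alone (argument handling and the exact file selection are assumptions, not verbatim SPDK code):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as suggested by the trace; not the verbatim setup/common.sh body.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read that node's own meminfo file when it exists
        # (the common.sh@23/@25 checks visible later in this trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix used by per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue     # e.g. HugePages_Surp, AnonHugePages
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp                   # prints 0 on the system traced above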
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:11.408 node0=512 expecting 513
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:11.408 node1=513 expecting 512
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:11.408
00:04:11.408 real 0m3.986s
00:04:11.408 user 0m1.362s
00:04:11.408 sys 0m2.579s
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:11.408 11:45:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:11.408 ************************************
00:04:11.408 END TEST odd_alloc
00:04:11.408 ************************************
00:04:11.408 11:45:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:11.408 11:45:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:11.408 11:45:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:11.408 11:45:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.408 ************************************
00:04:11.409 START TEST custom_alloc
00:04:11.409 ************************************
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- #
(( size >= default_hugepages )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.409 11:45:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:15.609 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.2 (8086 
2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.609 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.609 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42516552 kB' 'MemAvailable: 46505220 kB' 'Buffers: 6064 kB' 'Cached: 10544740 kB' 'SwapCached: 0 kB' 'Active: 7388084 kB' 'Inactive: 3689560 kB' 'Active(anon): 6989660 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529664 kB' 'Mapped: 171536 kB' 
'Shmem: 6462820 kB' 'KReclaimable: 548392 kB' 'Slab: 1197400 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 649008 kB' 'KernelStack: 22064 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8456888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
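Before the per-key scan resumes below, the aggregate counters in the meminfo snapshot printed above line up with what custom_alloc requested earlier in this trace (HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'). A quick bash check of that arithmetic, using only numbers that appear in the log:

    #!/usr/bin/env bash
    # 512 pages on node0 plus 1024 on node1, at the 2048 kB Hugepagesize shown above.
    nodes_hp=([0]=512 [1]=1024)
    total=$((nodes_hp[0] + nodes_hp[1]))                         # 1536, matching HugePages_Total and HugePages_Free
    echo "HugePages_Total=$total Hugetlb=$((total * 2048)) kB"   # 3145728 kB, matching Hugetlb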
00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.610 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 
11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42517508 kB' 'MemAvailable: 46506176 kB' 'Buffers: 6064 kB' 'Cached: 10544744 kB' 'SwapCached: 0 kB' 'Active: 7387248 kB' 'Inactive: 3689560 kB' 'Active(anon): 6988824 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529304 kB' 'Mapped: 171436 kB' 'Shmem: 6462824 kB' 'KReclaimable: 548392 kB' 'Slab: 1197384 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648992 kB' 'KernelStack: 22064 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8456904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.611 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 
11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.612 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42517556 kB' 'MemAvailable: 46506224 kB' 'Buffers: 6064 kB' 'Cached: 10544748 kB' 'SwapCached: 0 kB' 'Active: 7386948 kB' 'Inactive: 3689560 kB' 'Active(anon): 6988524 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529004 kB' 'Mapped: 171436 kB' 'Shmem: 6462828 kB' 'KReclaimable: 548392 kB' 'Slab: 1197384 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648992 kB' 'KernelStack: 22064 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8456928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 
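The long runs of "continue" / "read -r var val _" above and below are the meminfo scan inside setup/common.sh's get_meminfo helper: the whole of /proc/meminfo (or a per-node meminfo file) is slurped with mapfile, any "Node <N> " prefix is stripped, each record is split on ': ', and every key is compared against the requested field; non-matching keys fall through to "continue" while the matching key's value is echoed before the function returns. A minimal standalone sketch of that pattern follows, using a hypothetical helper name meminfo_value (not the SPDK function itself), assuming bash with extglob available:

    #!/usr/bin/env bash
    # Hypothetical sketch of the parsing pattern seen in the trace: pick one field
    # out of /proc/meminfo or /sys/devices/system/node/node<N>/meminfo.
    shopt -s extglob
    meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            file=/sys/devices/system/node/node${node}/meminfo
        while read -r line; do
            # Per-node files prefix every record with "Node <N> "; strip it first.
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            # Keys that don't match are skipped (the long runs of 'continue' above);
            # the matching key's value is printed and the scan stops.
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }
    meminfo_value HugePages_Total      # 1536 in the snapshot just printed
    meminfo_value HugePages_Free 0     # node0's free huge pages (512 on this runner)

The trace then resumes the field-by-field comparison against HugePages_Rsvd.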
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.613 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 
11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.614 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:15.615 nr_hugepages=1536 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.615 resv_hugepages=0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.615 surplus_hugepages=0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.615 anon_hugepages=0 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42516800 kB' 'MemAvailable: 46505468 kB' 'Buffers: 6064 kB' 'Cached: 10544784 kB' 'SwapCached: 0 kB' 'Active: 7387628 kB' 'Inactive: 3689560 kB' 'Active(anon): 6989204 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529644 kB' 'Mapped: 171436 kB' 'Shmem: 
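At this point the test has resolved resv=0 alongside the earlier surp=0, reports nr_hugepages=1536, and re-reads HugePages_Total to confirm the custom allocation request is fully accounted for. Restated as a hedged, standalone check (the values come from this run's trace and the variable names mirror hugepages.sh, but this is not the script itself):

    # Consistency check traced above: the 1536 pages requested by the custom_alloc
    # test must equal allocated + surplus + reserved pages before per-node checks run.
    requested=1536 nr_hugepages=1536 surp=0 resv=0
    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "hugepage accounting consistent: ${nr_hugepages} pages, ${surp} surplus, ${resv} reserved"
    else
        echo "hugepage accounting mismatch" >&2
    fi

The second full /proc/meminfo snapshot above is scanned the same way, this time for HugePages_Total.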
6462864 kB' 'KReclaimable: 548392 kB' 'Slab: 1197384 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648992 kB' 'KernelStack: 22080 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8456948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.615 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.616 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
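The global total checks out (echo 1536, return 0), so get_nodes enumerates the NUMA nodes and, as the trace that follows shows, records 512 pages on node0 and 1024 on node1 (no_nodes=2). A small sketch of that enumeration, reusing the illustrative meminfo_value helper from above (both names are assumptions, not the SPDK code):

    # Enumerate /sys/devices/system/node/node<N> and record each node's huge page
    # total, the way nodes_sys[] is populated in the trace below.
    shopt -s extglob nullglob
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                                   # ".../node1" -> "1"
        nodes_sys[$n]=$(meminfo_value HugePages_Total "$n")
    done
    echo "no_nodes=${#nodes_sys[@]}"                       # 2 on this runner
    declare -p nodes_sys                                   # nodes_sys[0]=512, nodes_sys[1]=1024 here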
00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26422564 kB' 'MemUsed: 6216576 kB' 'SwapCached: 0 kB' 'Active: 2857264 kB' 'Inactive: 231284 kB' 'Active(anon): 2724216 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710572 kB' 'Mapped: 76044 kB' 'AnonPages: 381192 kB' 'Shmem: 2346240 kB' 'KernelStack: 12472 kB' 'PageTables: 5452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533396 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.617 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.618 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16094640 kB' 'MemUsed: 11561440 kB' 'SwapCached: 0 kB' 'Active: 4529988 kB' 'Inactive: 3458276 kB' 'Active(anon): 4264612 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7840296 kB' 'Mapped: 95392 kB' 'AnonPages: 148116 kB' 'Shmem: 4116644 kB' 'KernelStack: 9592 kB' 'PageTables: 2848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328672 kB' 'Slab: 663988 kB' 'SReclaimable: 328672 kB' 'SUnreclaim: 335316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.619 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.620 11:46:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:15.620 node0=512 expecting 512 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:15.620 node1=1024 expecting 1024 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:15.620 00:04:15.620 real 0m4.422s 00:04:15.620 user 0m1.767s 00:04:15.620 sys 0m2.738s 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.620 11:46:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.620 ************************************ 00:04:15.620 END TEST custom_alloc 00:04:15.620 ************************************ 00:04:15.620 11:46:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:15.620 11:46:01 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.620 11:46:01 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.620 11:46:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.620 ************************************ 00:04:15.620 START TEST no_shrink_alloc 00:04:15.620 ************************************ 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.620 11:46:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.620 11:46:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:19.816 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.816 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.816 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541756 kB' 'MemAvailable: 47530424 kB' 'Buffers: 6064 kB' 'Cached: 10545072 kB' 'SwapCached: 0 kB' 'Active: 7388120 kB' 'Inactive: 3689560 kB' 'Active(anon): 6989696 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529876 kB' 'Mapped: 171448 kB' 'Shmem: 6463152 kB' 'KReclaimable: 548392 kB' 'Slab: 1197052 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648660 kB' 'KernelStack: 22128 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8458160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 
11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.817 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541000 kB' 'MemAvailable: 47529668 kB' 'Buffers: 6064 kB' 'Cached: 10545076 kB' 'SwapCached: 0 kB' 'Active: 7388428 kB' 'Inactive: 3689560 kB' 'Active(anon): 6990004 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530196 kB' 'Mapped: 171448 kB' 'Shmem: 6463156 kB' 'KReclaimable: 548392 kB' 'Slab: 1197036 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648644 kB' 'KernelStack: 22080 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8458180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.818 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.819 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541708 kB' 'MemAvailable: 47530376 kB' 'Buffers: 6064 kB' 'Cached: 10545092 kB' 'SwapCached: 0 kB' 'Active: 7388424 kB' 'Inactive: 3689560 kB' 'Active(anon): 6990000 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530096 kB' 'Mapped: 171448 kB' 'Shmem: 6463172 kB' 'KReclaimable: 548392 kB' 'Slab: 1197108 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648716 kB' 
'KernelStack: 22080 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8458200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.820 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.821 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.822 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
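The long runs of "# continue" entries above and below are the xtrace of get_meminfo in setup/common.sh walking /proc/meminfo one key at a time until it reaches the field named in its first argument (HugePages_Surp, then HugePages_Rsvd in this stretch); every key that does not match prints one "continue" line. A minimal reconstruction of that helper, pieced together from the traced lines (the real script's structure and line numbers may differ):

shopt -s extglob                      # the traced "+([0-9])" patterns require extglob
get_meminfo() {
    local get=$1 node=${2:-}          # field name, optional NUMA node number
    local var val
    local mem_f=/proc/meminfo
    local -a mem
    # per-node counters live under /sys and prefix every line with "Node <N> "
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch is one "# continue" entry in this log
        echo "$val"                        # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

The "@33 echo 0" / "@33 return 0" entries further down mark the point where the requested key finally matches and its value is handed back to hugepages.sh.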
00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.083 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.084 nr_hugepages=1024 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.084 resv_hugepages=0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.084 surplus_hugepages=0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.084 anon_hugepages=0 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541828 kB' 'MemAvailable: 47530496 kB' 'Buffers: 6064 kB' 'Cached: 10545112 kB' 'SwapCached: 0 kB' 'Active: 7388768 kB' 'Inactive: 3689560 kB' 'Active(anon): 6990344 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530440 kB' 'Mapped: 171448 kB' 'Shmem: 6463192 kB' 'KReclaimable: 548392 kB' 'Slab: 1197108 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648716 kB' 'KernelStack: 22096 kB' 
'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8458224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.084 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
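Around these scans, setup/hugepages.sh (the no_shrink_alloc test) turns the raw counters into surp/resv/nr_hugepages values (echoed above as nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks that the system-wide totals add up before repeating the query per NUMA node. A rough sketch of that bookkeeping, assuming the get_meminfo helper sketched earlier (which also enables extglob) and using 1024 as a stand-in for the configured page count carried in the real script's own variables:

verify_no_shrink_alloc() {               # hypothetical name; the traced checks live inline in hugepages.sh
    local nr_hugepages=1024
    local surp resv node
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # system-wide: HugePages_Total must equal the requested pages plus surplus and reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
    # the same counter is then read per NUMA node (node0 holds all 1024 pages here, node1 none)
    for node in /sys/devices/system/node/node+([0-9]); do
        get_meminfo HugePages_Surp "${node##*node}"
    done
}

The per-node pass is what switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo in the trace that follows.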
00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.085 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.086 11:46:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25376672 kB' 'MemUsed: 7262468 kB' 'SwapCached: 0 kB' 'Active: 2856596 kB' 'Inactive: 231284 kB' 'Active(anon): 2723548 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710876 kB' 'Mapped: 76056 kB' 'AnonPages: 380168 kB' 'Shmem: 2346544 kB' 'KernelStack: 12504 kB' 'PageTables: 5416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 533196 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.086 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:20.087 node0=1024 expecting 1024 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.087 11:46:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:23.376 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.376 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.376 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.4 (8086 2021): Already using 
the vfio-pci driver 00:04:23.377 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.377 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.377 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43543072 kB' 'MemAvailable: 47531740 kB' 'Buffers: 6064 kB' 'Cached: 10545220 kB' 'SwapCached: 0 kB' 'Active: 7390024 kB' 'Inactive: 3689560 kB' 'Active(anon): 6991600 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531500 kB' 'Mapped: 172692 kB' 'Shmem: 6463300 kB' 'KReclaimable: 548392 kB' 'Slab: 1196792 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648400 kB' 'KernelStack: 22160 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8493348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.641 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:23.642 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43543072 kB' 'MemAvailable: 47531740 kB' 'Buffers: 6064 kB' 'Cached: 10545220 kB' 'SwapCached: 0 kB' 'Active: 7390520 kB' 'Inactive: 3689560 kB' 'Active(anon): 6992096 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531984 kB' 'Mapped: 172344 kB' 'Shmem: 6463300 kB' 'KReclaimable: 548392 kB' 'Slab: 1196784 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648392 kB' 'KernelStack: 22160 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8493368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.643 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
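Editor's note: the trace above is setup/common.sh's get_meminfo walking every key of the memory snapshot, skipping non-matching keys with continue until it reaches HugePages_Surp, echoing its value (0), which setup/hugepages.sh then records as surp=0. On these node-less passes node= is empty, so the /sys/devices/system/node/node/meminfo existence check (visible at the start of the next get_meminfo call below) fails and the helper falls back to /proc/meminfo. A minimal sketch of that field-extraction pattern follows; the helper name and the simplified read-from-file flow are assumptions for illustration, not the actual SPDK script:

get_meminfo_field() {                      # hypothetical helper, not setup/common.sh itself
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # split "Key:   value kB" on ':' and spaces
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"                        # value only, e.g. 0 for HugePages_Surp
        return 0
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo_field HugePages_Surp)   # 0 on this machine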
00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.644 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43543796 kB' 'MemAvailable: 47532464 kB' 'Buffers: 6064 kB' 'Cached: 10545240 kB' 'SwapCached: 0 kB' 'Active: 7390224 kB' 'Inactive: 3689560 kB' 'Active(anon): 6991800 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531676 kB' 'Mapped: 172344 kB' 'Shmem: 6463320 kB' 'KReclaimable: 548392 kB' 'Slab: 1196820 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648428 kB' 'KernelStack: 22160 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8493388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 
11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.645 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.646 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.647 nr_hugepages=1024 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.647 resv_hugepages=0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.647 surplus_hugepages=0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.647 anon_hugepages=0 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.647 11:46:09 
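Editor's note: at this point hugepages.sh has collected surp=0 and resv=0, re-echoes the configuration (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and runs the accounting checks traced at hugepages.sh@107 and @109: 1024 must equal nr_hugepages + surp + resv, and nr_hugepages itself must still be 1024. A hedged sketch of that invariant, reusing the variable names echoed above (the surrounding harness and the origin of the literal 1024 are assumptions, not reproduced from the script):

nr_hugepages=1024   # configured pool size echoed above
surp=0              # HugePages_Surp read above
resv=0              # HugePages_Rsvd read above

# The pool must be explained entirely by requested + surplus + reserved pages,
# and the requested count itself must be unchanged.
(( 1024 == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
(( 1024 == nr_hugepages ))               || { echo "hugepage pool shrank" >&2; exit 1; }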
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43544428 kB' 'MemAvailable: 47533096 kB' 'Buffers: 6064 kB' 'Cached: 10545264 kB' 'SwapCached: 0 kB' 'Active: 7390208 kB' 'Inactive: 3689560 kB' 'Active(anon): 6991784 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531628 kB' 'Mapped: 172344 kB' 'Shmem: 6463344 kB' 'KReclaimable: 548392 kB' 'Slab: 1196820 kB' 'SReclaimable: 548392 kB' 'SUnreclaim: 648428 kB' 'KernelStack: 22144 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8493412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.647 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.648 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.649 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25380276 kB' 'MemUsed: 7258864 kB' 'SwapCached: 0 kB' 'Active: 2855896 kB' 'Inactive: 231284 kB' 'Active(anon): 2722848 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2710972 kB' 'Mapped: 76836 kB' 'AnonPages: 379336 kB' 'Shmem: 2346640 kB' 'KernelStack: 12504 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219720 kB' 'Slab: 532828 kB' 'SReclaimable: 219720 kB' 'SUnreclaim: 313108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
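Editor's note: once the global total check passes (get_meminfo HugePages_Total returns 1024 at hugepages.sh@110), get_nodes enumerates /sys/devices/system/node/node<N> with an extglob pattern, finds no_nodes=2, and the same HugePages_Surp query is repeated per node. For node 0 the helper switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line before parsing, as the mem=("${mem[@]#Node +([0-9]) }") expansion above shows. A sketch of that per-node variant under the same assumptions as the earlier snippet (hypothetical helper name, simplified flow):

shopt -s extglob                            # required for the +([0-9]) patterns below

get_node_meminfo_field() {                  # hypothetical helper, not setup/common.sh itself
    local get=$1 node=$2 var val _ line
    local -a mem
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

for node in /sys/devices/system/node/node+([0-9]); do
    echo "node ${node##*node}: HugePages_Surp=$(get_node_meminfo_field HugePages_Surp "${node##*node}")"
done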
00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.649 11:46:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.649 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.650 node0=1024 expecting 1024 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.650 00:04:23.650 real 0m8.034s 00:04:23.650 user 0m2.779s 00:04:23.650 sys 0m5.326s 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.650 11:46:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.650 ************************************ 00:04:23.650 END TEST no_shrink_alloc 00:04:23.650 ************************************ 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:23.650 11:46:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:23.650 00:04:23.650 real 0m31.333s 00:04:23.650 user 0m10.484s 00:04:23.650 sys 0m19.134s 00:04:23.650 11:46:09 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.650 11:46:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.650 ************************************ 00:04:23.650 END TEST hugepages 00:04:23.650 ************************************ 00:04:23.910 11:46:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:23.910 11:46:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.910 11:46:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.910 11:46:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.910 ************************************ 00:04:23.910 START TEST driver 00:04:23.910 ************************************ 00:04:23.910 11:46:09 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:23.910 * Looking for test storage... 
00:04:23.910 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:23.910 11:46:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:23.910 11:46:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.910 11:46:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.480 11:46:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.480 11:46:15 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.480 11:46:15 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.480 11:46:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.480 ************************************ 00:04:30.480 START TEST guess_driver 00:04:30.480 ************************************ 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:30.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:30.480 Looking for driver=vfio-pci 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.480 11:46:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.769 11:46:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.674 11:46:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.241 00:04:42.241 real 0m11.769s 00:04:42.241 user 0m3.002s 00:04:42.241 sys 0m6.041s 00:04:42.241 11:46:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.241 11:46:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.241 ************************************ 00:04:42.241 END TEST guess_driver 00:04:42.241 ************************************ 00:04:42.241 00:04:42.241 real 0m17.474s 00:04:42.241 user 0m4.662s 00:04:42.241 sys 0m9.247s 00:04:42.241 11:46:27 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.241 
11:46:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.241 ************************************ 00:04:42.241 END TEST driver 00:04:42.241 ************************************ 00:04:42.241 11:46:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:42.241 11:46:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.241 11:46:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.241 11:46:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.241 ************************************ 00:04:42.241 START TEST devices 00:04:42.241 ************************************ 00:04:42.241 11:46:27 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:42.241 * Looking for test storage... 00:04:42.241 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:42.241 11:46:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:42.241 11:46:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:42.241 11:46:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.241 11:46:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.430 11:46:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:46.430 11:46:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:46.431 11:46:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:46.431 No valid GPT data, bailing 
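The devices test above probes /dev/nvme0n1 with SPDK's spdk-gpt.py helper, gets "No valid GPT data, bailing", and then (in the trace that continues below) falls back to a blkid partition-table check and a size comparison against min_disk_size=3221225472 before claiming the disk. A minimal standalone sketch of that kind of pre-flight check follows; DISK and the 3 GiB threshold are assumed example values, not taken verbatim from devices.sh.

# Sketch only: decide whether a blank, sufficiently large disk may be used for a test.
# DISK is an assumed example device name; adjust before running.
DISK=nvme0n1
MIN_SIZE=$((3 * 1024 * 1024 * 1024))   # 3 GiB, mirroring min_disk_size=3221225472 in the trace

# blkid prints the partition-table type (e.g. "gpt") or nothing if the disk is blank;
# it exits non-zero when nothing is found, hence the "|| true"
pt=$(blkid -s PTTYPE -o value "/dev/$DISK" || true)

# /sys/block/<dev>/size reports the device size in 512-byte sectors
size=$(( $(cat "/sys/block/$DISK/size") * 512 ))

if [[ -z "$pt" && "$size" -ge "$MIN_SIZE" ]]; then
    echo "/dev/$DISK is blank and large enough ($size bytes); usable for the test"
else
    echo "/dev/$DISK skipped (pt='$pt', size=$size bytes)"
fi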
00:04:46.431 11:46:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.431 11:46:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:46.431 11:46:32 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.431 11:46:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.431 ************************************ 00:04:46.431 START TEST nvme_mount 00:04:46.431 ************************************ 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.431 11:46:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:47.368 Creating new GPT entries in memory. 00:04:47.368 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.368 other utilities. 00:04:47.368 11:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.368 11:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.368 11:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.368 11:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.368 11:46:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:48.305 Creating new GPT entries in memory. 00:04:48.305 The operation has completed successfully. 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4020418 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.305 11:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.500 11:46:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.500 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.501 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.501 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.501 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.501 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.501 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 
-- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.501 11:46:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.837 11:46:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.097 11:46:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:00.288 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.289 11:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.549 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.549 00:05:00.549 real 0m14.306s 00:05:00.549 user 0m4.025s 00:05:00.549 sys 0m8.168s 00:05:00.549 11:46:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.549 11:46:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.549 ************************************ 00:05:00.549 END TEST nvme_mount 00:05:00.549 ************************************ 00:05:00.549 11:46:46 setup.sh.devices -- 
setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:00.549 11:46:46 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.549 11:46:46 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.549 11:46:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.549 ************************************ 00:05:00.549 START TEST dm_mount 00:05:00.549 ************************************ 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.549 11:46:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:01.486 Creating new GPT entries in memory. 00:05:01.486 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.486 other utilities. 00:05:01.486 11:46:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.486 11:46:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.486 11:46:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.486 11:46:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.486 11:46:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.424 Creating new GPT entries in memory. 00:05:02.424 The operation has completed successfully. 
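The dm_mount test above wipes the GPT on nvme0n1 with sgdisk --zap-all and then carves out two 1 GiB partitions (size 1073741824 bytes, divided down to 2097152 sectors in setup/common.sh), starting at sector 2048. A minimal standalone sketch of that partitioning step, assuming a scratch disk at /dev/nvme0n1 and leaving out the uevent synchronization the harness does through sync_dev_uevents.sh; destructive, for illustration only:

  disk=/dev/nvme0n1
  size=$((1073741824 / 512))          # 1 GiB expressed in 512-byte sectors, as in setup/common.sh
  sgdisk "$disk" --zap-all            # destroy any existing GPT/MBR metadata
  start=2048
  for part in 1 2; do
    end=$((start + size - 1))
    sgdisk "$disk" --new=${part}:${start}:${end}   # yields 1:2048:2099199 and 2:2099200:4196351
    start=$((end + 1))
  done
  partprobe "$disk"                   # ask the kernel to re-read the new partition table

With those two partitions in place the test goes on to build the nvme_dm_test device-mapper target on top of them, as the trace below shows.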
00:05:02.424 11:46:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.424 11:46:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.424 11:46:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.424 11:46:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.424 11:46:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.803 The operation has completed successfully. 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4025590 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.803 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.804 
11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.804 11:46:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:08.030 11:46:53 
setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.030 11:46:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.221 11:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:12.221 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:12.221 00:05:12.221 real 0m11.535s 00:05:12.221 user 0m2.989s 00:05:12.221 sys 0m5.673s 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.221 11:46:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:12.221 ************************************ 00:05:12.221 END TEST dm_mount 00:05:12.221 ************************************ 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.221 11:46:58 
setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.221 11:46:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.480 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.480 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.480 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.480 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.480 11:46:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:12.480 00:05:12.480 real 0m31.019s 00:05:12.480 user 0m8.796s 00:05:12.480 sys 0m17.175s 00:05:12.480 11:46:58 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.480 11:46:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.480 ************************************ 00:05:12.480 END TEST devices 00:05:12.480 ************************************ 00:05:12.480 00:05:12.480 real 1m49.908s 00:05:12.480 user 0m33.351s 00:05:12.480 sys 1m4.022s 00:05:12.480 11:46:58 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.480 11:46:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.480 ************************************ 00:05:12.480 END TEST setup.sh 00:05:12.480 ************************************ 00:05:12.480 11:46:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:05:16.737 Hugepages 00:05:16.737 node hugesize free / total 00:05:16.737 node0 1048576kB 0 / 0 00:05:16.737 node0 2048kB 1024 / 1024 00:05:16.737 node1 1048576kB 0 / 0 00:05:16.737 node1 2048kB 1024 / 1024 00:05:16.737 00:05:16.737 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:16.737 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:16.737 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:16.737 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:16.737 11:47:02 -- spdk/autotest.sh@130 -- # uname -s 00:05:16.737 11:47:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:16.737 11:47:02 -- spdk/autotest.sh@132 -- # 
nvme_namespace_revert 00:05:16.737 11:47:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:20.922 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.922 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:23.459 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.459 11:47:09 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:24.028 11:47:10 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:24.028 11:47:10 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:24.028 11:47:10 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.028 11:47:10 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:24.028 11:47:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.028 11:47:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.028 11:47:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.028 11:47:10 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.028 11:47:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.287 11:47:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.287 11:47:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:24.287 11:47:10 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.475 Waiting for block devices as requested 00:05:28.475 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:28.475 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:28.475 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:28.475 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:28.475 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:28.733 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:28.733 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:28.733 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:28.992 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:28.992 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:28.992 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.249 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.508 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.508 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:29.508 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.767 11:47:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:29.767 11:47:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 
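get_nvme_ctrlr_from_bdf, traced below, turns a PCI address like 0000:d8:00.0 back into its /dev/nvme0 controller node purely through sysfs symlinks. A short hedged equivalent for cross-checking by hand, assuming the controller is bound to the kernel nvme driver (as it is here after setup.sh reset):

  for ctrl in /sys/class/nvme/nvme*; do
    bdf=$(basename "$(readlink -f "$ctrl/device")")        # e.g. 0000:d8:00.0
    printf '%s -> /dev/%s\n' "$bdf" "$(basename "$ctrl")"  # prints the BDF-to-controller mapping
  done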
00:05:29.767 11:47:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:29.767 11:47:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:29.767 11:47:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:29.767 11:47:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:29.767 11:47:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:29.767 11:47:15 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:29.767 11:47:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:29.767 11:47:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:29.767 11:47:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:29.767 11:47:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:29.767 11:47:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:29.767 11:47:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:29.767 11:47:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:29.767 11:47:15 -- common/autotest_common.sh@1557 -- # continue 00:05:29.767 11:47:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:29.767 11:47:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.767 11:47:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.767 11:47:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:29.767 11:47:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.767 11:47:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.767 11:47:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:33.958 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.958 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:35.862 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:35.862 11:47:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:35.862 11:47:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.862 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:36.120 11:47:21 -- spdk/autotest.sh@144 -- # 
opal_revert_cleanup 00:05:36.120 11:47:21 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:36.120 11:47:21 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.120 11:47:21 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:36.120 11:47:21 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:36.120 11:47:21 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:36.120 11:47:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:36.120 11:47:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:36.120 11:47:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.120 11:47:21 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.120 11:47:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:36.120 11:47:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:36.120 11:47:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:36.120 11:47:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:36.120 11:47:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:36.120 11:47:22 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:36.120 11:47:22 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:36.120 11:47:22 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:36.120 11:47:22 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:36.120 11:47:22 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:36.120 11:47:22 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4036877 00:05:36.120 11:47:22 -- common/autotest_common.sh@1598 -- # waitforlisten 4036877 00:05:36.120 11:47:22 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.120 11:47:22 -- common/autotest_common.sh@831 -- # '[' -z 4036877 ']' 00:05:36.120 11:47:22 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.120 11:47:22 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.120 11:47:22 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.120 11:47:22 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.120 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:36.120 [2024-07-25 11:47:22.188289] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
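waitforlisten above simply polls until the freshly launched spdk_tgt (pid 4036877) answers on its UNIX-domain RPC socket; everything after that, such as the bdev_nvme_attach_controller and bdev_nvme_opal_revert calls further down, is driven through scripts/rpc.py against the same socket. A rough sketch of an equivalent readiness probe, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as a harmless query:

  sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
      echo "spdk_tgt is listening on $sock"
      break
    fi
    sleep 0.1                      # retry for up to roughly 10 seconds
  done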
00:05:36.120 [2024-07-25 11:47:22.188334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036877 ] 00:05:36.378 [2024-07-25 11:47:22.295340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.378 [2024-07-25 11:47:22.378341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.968 11:47:23 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.968 11:47:23 -- common/autotest_common.sh@864 -- # return 0 00:05:36.968 11:47:23 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:36.968 11:47:23 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:36.968 11:47:23 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:40.249 nvme0n1 00:05:40.249 11:47:26 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:40.508 [2024-07-25 11:47:26.381223] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:40.508 request: 00:05:40.508 { 00:05:40.508 "nvme_ctrlr_name": "nvme0", 00:05:40.508 "password": "test", 00:05:40.508 "method": "bdev_nvme_opal_revert", 00:05:40.508 "req_id": 1 00:05:40.508 } 00:05:40.508 Got JSON-RPC error response 00:05:40.508 response: 00:05:40.508 { 00:05:40.508 "code": -32602, 00:05:40.508 "message": "Invalid parameters" 00:05:40.508 } 00:05:40.508 11:47:26 -- common/autotest_common.sh@1604 -- # true 00:05:40.508 11:47:26 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:40.508 11:47:26 -- common/autotest_common.sh@1608 -- # killprocess 4036877 00:05:40.508 11:47:26 -- common/autotest_common.sh@950 -- # '[' -z 4036877 ']' 00:05:40.508 11:47:26 -- common/autotest_common.sh@954 -- # kill -0 4036877 00:05:40.508 11:47:26 -- common/autotest_common.sh@955 -- # uname 00:05:40.508 11:47:26 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.508 11:47:26 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4036877 00:05:40.508 11:47:26 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.508 11:47:26 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.508 11:47:26 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4036877' 00:05:40.508 killing process with pid 4036877 00:05:40.508 11:47:26 -- common/autotest_common.sh@969 -- # kill 4036877 00:05:40.508 11:47:26 -- common/autotest_common.sh@974 -- # wait 4036877 00:05:43.038 11:47:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:43.038 11:47:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:43.038 11:47:29 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:43.038 11:47:29 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:43.038 11:47:29 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:43.974 Restarting all devices. 
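The qat_setup.sh run that starts here restarts the c6xx QAT endpoints and enables SR-IOV on each physical function; the output below ends with every PF set to 16 VFs and the devices handed to uio_pci_generic. A hedged sketch of inspecting (and, if desired, requesting) the VF count by hand through standard PCI sysfs attributes, assuming the same PF addresses; the script itself may use a different mechanism internally:

  for pf in 0000:1a:00.0 0000:1c:00.0 0000:1e:00.0 0000:3d:00.0 0000:3f:00.0; do
    dev=/sys/bus/pci/devices/$pf
    echo "$pf: $(cat "$dev/sriov_numvfs")/$(cat "$dev/sriov_totalvfs") VFs enabled"
    # echo 16 > "$dev/sriov_numvfs"   # request 16 VFs, matching the 'set to 16 VFs' lines below
  done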
00:05:50.539 lstat() error: No such file or directory 00:05:50.539 QAT Error: No GENERAL section found 00:05:50.539 Failed to configure qat_dev0 00:05:50.539 lstat() error: No such file or directory 00:05:50.539 QAT Error: No GENERAL section found 00:05:50.539 Failed to configure qat_dev1 00:05:50.539 lstat() error: No such file or directory 00:05:50.539 QAT Error: No GENERAL section found 00:05:50.539 Failed to configure qat_dev2 00:05:50.539 lstat() error: No such file or directory 00:05:50.539 QAT Error: No GENERAL section found 00:05:50.539 Failed to configure qat_dev3 00:05:50.539 lstat() error: No such file or directory 00:05:50.539 QAT Error: No GENERAL section found 00:05:50.539 Failed to configure qat_dev4 00:05:50.539 enable sriov 00:05:50.539 Checking status of all devices. 00:05:50.539 There is 5 QAT acceleration device(s) in the system: 00:05:50.539 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:50.539 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:50.539 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:50.539 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:50.539 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:50.539 0000:1a:00.0 set to 16 VFs 00:05:51.106 0000:1c:00.0 set to 16 VFs 00:05:52.042 0000:1e:00.0 set to 16 VFs 00:05:52.979 0000:3d:00.0 set to 16 VFs 00:05:53.545 0000:3f:00.0 set to 16 VFs 00:05:56.071 Properly configured the qat device with driver uio_pci_generic. 00:05:56.071 11:47:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:56.071 11:47:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:56.071 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:56.071 11:47:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:56.071 11:47:41 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:56.071 11:47:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.071 11:47:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.071 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:56.071 ************************************ 00:05:56.071 START TEST env 00:05:56.071 ************************************ 00:05:56.071 11:47:42 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:56.071 * Looking for test storage... 
00:05:56.071 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:56.071 11:47:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:56.071 11:47:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.071 11:47:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.071 11:47:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.071 ************************************ 00:05:56.071 START TEST env_memory 00:05:56.071 ************************************ 00:05:56.071 11:47:42 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:56.071 00:05:56.071 00:05:56.071 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.071 http://cunit.sourceforge.net/ 00:05:56.071 00:05:56.071 00:05:56.071 Suite: memory 00:05:56.329 Test: alloc and free memory map ...[2024-07-25 11:47:42.221042] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:56.329 passed 00:05:56.329 Test: mem map translation ...[2024-07-25 11:47:42.247933] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:56.329 [2024-07-25 11:47:42.247956] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:56.329 [2024-07-25 11:47:42.248007] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:56.329 [2024-07-25 11:47:42.248020] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:56.329 passed 00:05:56.329 Test: mem map registration ...[2024-07-25 11:47:42.301163] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:56.329 [2024-07-25 11:47:42.301186] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:56.329 passed 00:05:56.329 Test: mem map adjacent registrations ...passed 00:05:56.329 00:05:56.329 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.329 suites 1 1 n/a 0 0 00:05:56.329 tests 4 4 4 0 0 00:05:56.329 asserts 152 152 152 0 n/a 00:05:56.329 00:05:56.329 Elapsed time = 0.185 seconds 00:05:56.329 00:05:56.329 real 0m0.199s 00:05:56.329 user 0m0.187s 00:05:56.329 sys 0m0.012s 00:05:56.329 11:47:42 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.329 11:47:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:56.329 ************************************ 00:05:56.329 END TEST env_memory 00:05:56.329 ************************************ 00:05:56.329 11:47:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.329 11:47:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.329 11:47:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.329 11:47:42 env 
-- common/autotest_common.sh@10 -- # set +x 00:05:56.590 ************************************ 00:05:56.590 START TEST env_vtophys 00:05:56.590 ************************************ 00:05:56.590 11:47:42 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.590 EAL: lib.eal log level changed from notice to debug 00:05:56.590 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.590 EAL: Detected lcore 1 as core 1 on socket 0 00:05:56.590 EAL: Detected lcore 2 as core 2 on socket 0 00:05:56.590 EAL: Detected lcore 3 as core 3 on socket 0 00:05:56.590 EAL: Detected lcore 4 as core 4 on socket 0 00:05:56.590 EAL: Detected lcore 5 as core 5 on socket 0 00:05:56.590 EAL: Detected lcore 6 as core 6 on socket 0 00:05:56.590 EAL: Detected lcore 7 as core 8 on socket 0 00:05:56.590 EAL: Detected lcore 8 as core 9 on socket 0 00:05:56.590 EAL: Detected lcore 9 as core 10 on socket 0 00:05:56.590 EAL: Detected lcore 10 as core 11 on socket 0 00:05:56.590 EAL: Detected lcore 11 as core 12 on socket 0 00:05:56.590 EAL: Detected lcore 12 as core 13 on socket 0 00:05:56.590 EAL: Detected lcore 13 as core 14 on socket 0 00:05:56.590 EAL: Detected lcore 14 as core 16 on socket 0 00:05:56.590 EAL: Detected lcore 15 as core 17 on socket 0 00:05:56.590 EAL: Detected lcore 16 as core 18 on socket 0 00:05:56.590 EAL: Detected lcore 17 as core 19 on socket 0 00:05:56.590 EAL: Detected lcore 18 as core 20 on socket 0 00:05:56.590 EAL: Detected lcore 19 as core 21 on socket 0 00:05:56.590 EAL: Detected lcore 20 as core 22 on socket 0 00:05:56.590 EAL: Detected lcore 21 as core 24 on socket 0 00:05:56.590 EAL: Detected lcore 22 as core 25 on socket 0 00:05:56.590 EAL: Detected lcore 23 as core 26 on socket 0 00:05:56.590 EAL: Detected lcore 24 as core 27 on socket 0 00:05:56.590 EAL: Detected lcore 25 as core 28 on socket 0 00:05:56.590 EAL: Detected lcore 26 as core 29 on socket 0 00:05:56.590 EAL: Detected lcore 27 as core 30 on socket 0 00:05:56.590 EAL: Detected lcore 28 as core 0 on socket 1 00:05:56.590 EAL: Detected lcore 29 as core 1 on socket 1 00:05:56.590 EAL: Detected lcore 30 as core 2 on socket 1 00:05:56.590 EAL: Detected lcore 31 as core 3 on socket 1 00:05:56.590 EAL: Detected lcore 32 as core 4 on socket 1 00:05:56.590 EAL: Detected lcore 33 as core 5 on socket 1 00:05:56.590 EAL: Detected lcore 34 as core 6 on socket 1 00:05:56.590 EAL: Detected lcore 35 as core 8 on socket 1 00:05:56.590 EAL: Detected lcore 36 as core 9 on socket 1 00:05:56.590 EAL: Detected lcore 37 as core 10 on socket 1 00:05:56.590 EAL: Detected lcore 38 as core 11 on socket 1 00:05:56.590 EAL: Detected lcore 39 as core 12 on socket 1 00:05:56.590 EAL: Detected lcore 40 as core 13 on socket 1 00:05:56.590 EAL: Detected lcore 41 as core 14 on socket 1 00:05:56.590 EAL: Detected lcore 42 as core 16 on socket 1 00:05:56.590 EAL: Detected lcore 43 as core 17 on socket 1 00:05:56.590 EAL: Detected lcore 44 as core 18 on socket 1 00:05:56.590 EAL: Detected lcore 45 as core 19 on socket 1 00:05:56.590 EAL: Detected lcore 46 as core 20 on socket 1 00:05:56.590 EAL: Detected lcore 47 as core 21 on socket 1 00:05:56.590 EAL: Detected lcore 48 as core 22 on socket 1 00:05:56.590 EAL: Detected lcore 49 as core 24 on socket 1 00:05:56.590 EAL: Detected lcore 50 as core 25 on socket 1 00:05:56.590 EAL: Detected lcore 51 as core 26 on socket 1 00:05:56.590 EAL: Detected lcore 52 as core 27 on socket 1 00:05:56.590 EAL: Detected lcore 53 as core 28 on socket 1 
00:05:56.590 EAL: Detected lcore 54 as core 29 on socket 1 00:05:56.590 EAL: Detected lcore 55 as core 30 on socket 1 00:05:56.590 EAL: Detected lcore 56 as core 0 on socket 0 00:05:56.590 EAL: Detected lcore 57 as core 1 on socket 0 00:05:56.590 EAL: Detected lcore 58 as core 2 on socket 0 00:05:56.590 EAL: Detected lcore 59 as core 3 on socket 0 00:05:56.590 EAL: Detected lcore 60 as core 4 on socket 0 00:05:56.590 EAL: Detected lcore 61 as core 5 on socket 0 00:05:56.590 EAL: Detected lcore 62 as core 6 on socket 0 00:05:56.590 EAL: Detected lcore 63 as core 8 on socket 0 00:05:56.590 EAL: Detected lcore 64 as core 9 on socket 0 00:05:56.590 EAL: Detected lcore 65 as core 10 on socket 0 00:05:56.590 EAL: Detected lcore 66 as core 11 on socket 0 00:05:56.590 EAL: Detected lcore 67 as core 12 on socket 0 00:05:56.590 EAL: Detected lcore 68 as core 13 on socket 0 00:05:56.590 EAL: Detected lcore 69 as core 14 on socket 0 00:05:56.590 EAL: Detected lcore 70 as core 16 on socket 0 00:05:56.590 EAL: Detected lcore 71 as core 17 on socket 0 00:05:56.590 EAL: Detected lcore 72 as core 18 on socket 0 00:05:56.590 EAL: Detected lcore 73 as core 19 on socket 0 00:05:56.590 EAL: Detected lcore 74 as core 20 on socket 0 00:05:56.590 EAL: Detected lcore 75 as core 21 on socket 0 00:05:56.590 EAL: Detected lcore 76 as core 22 on socket 0 00:05:56.590 EAL: Detected lcore 77 as core 24 on socket 0 00:05:56.590 EAL: Detected lcore 78 as core 25 on socket 0 00:05:56.590 EAL: Detected lcore 79 as core 26 on socket 0 00:05:56.590 EAL: Detected lcore 80 as core 27 on socket 0 00:05:56.590 EAL: Detected lcore 81 as core 28 on socket 0 00:05:56.590 EAL: Detected lcore 82 as core 29 on socket 0 00:05:56.590 EAL: Detected lcore 83 as core 30 on socket 0 00:05:56.590 EAL: Detected lcore 84 as core 0 on socket 1 00:05:56.590 EAL: Detected lcore 85 as core 1 on socket 1 00:05:56.590 EAL: Detected lcore 86 as core 2 on socket 1 00:05:56.590 EAL: Detected lcore 87 as core 3 on socket 1 00:05:56.590 EAL: Detected lcore 88 as core 4 on socket 1 00:05:56.590 EAL: Detected lcore 89 as core 5 on socket 1 00:05:56.590 EAL: Detected lcore 90 as core 6 on socket 1 00:05:56.590 EAL: Detected lcore 91 as core 8 on socket 1 00:05:56.590 EAL: Detected lcore 92 as core 9 on socket 1 00:05:56.590 EAL: Detected lcore 93 as core 10 on socket 1 00:05:56.590 EAL: Detected lcore 94 as core 11 on socket 1 00:05:56.590 EAL: Detected lcore 95 as core 12 on socket 1 00:05:56.590 EAL: Detected lcore 96 as core 13 on socket 1 00:05:56.590 EAL: Detected lcore 97 as core 14 on socket 1 00:05:56.590 EAL: Detected lcore 98 as core 16 on socket 1 00:05:56.590 EAL: Detected lcore 99 as core 17 on socket 1 00:05:56.590 EAL: Detected lcore 100 as core 18 on socket 1 00:05:56.590 EAL: Detected lcore 101 as core 19 on socket 1 00:05:56.590 EAL: Detected lcore 102 as core 20 on socket 1 00:05:56.590 EAL: Detected lcore 103 as core 21 on socket 1 00:05:56.590 EAL: Detected lcore 104 as core 22 on socket 1 00:05:56.590 EAL: Detected lcore 105 as core 24 on socket 1 00:05:56.590 EAL: Detected lcore 106 as core 25 on socket 1 00:05:56.590 EAL: Detected lcore 107 as core 26 on socket 1 00:05:56.590 EAL: Detected lcore 108 as core 27 on socket 1 00:05:56.590 EAL: Detected lcore 109 as core 28 on socket 1 00:05:56.590 EAL: Detected lcore 110 as core 29 on socket 1 00:05:56.591 EAL: Detected lcore 111 as core 30 on socket 1 00:05:56.591 EAL: Maximum logical cores by configuration: 128 00:05:56.591 EAL: Detected CPU lcores: 112 00:05:56.591 EAL: Detected NUMA 
nodes: 2 00:05:56.591 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:56.591 EAL: Detected shared linkage of DPDK 00:05:56.591 EAL: No shared files mode enabled, IPC will be disabled 00:05:56.591 EAL: No shared files mode enabled, IPC is disabled 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver 
qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:56.591 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:56.591 EAL: Bus pci wants IOVA as 'PA' 00:05:56.591 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:56.591 EAL: Bus vdev wants IOVA as 'DC' 00:05:56.591 EAL: Selected IOVA mode 'PA' 00:05:56.591 EAL: Probing VFIO support... 00:05:56.591 EAL: IOMMU type 1 (Type 1) is supported 00:05:56.591 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:56.591 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:56.591 EAL: VFIO support initialized 00:05:56.591 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.591 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:56.591 EAL: Setting up physically contiguous memory... 
00:05:56.591 EAL: Setting maximum number of open files to 524288 00:05:56.591 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:56.591 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:56.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:56.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:56.591 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.591 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:56.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.591 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.591 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:56.591 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:56.592 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.592 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:56.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.592 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.592 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:56.592 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:56.592 EAL: Hugepages will be freed exactly as allocated. 00:05:56.592 EAL: No shared files mode enabled, IPC is disabled 00:05:56.592 EAL: No shared files mode enabled, IPC is disabled 00:05:56.592 EAL: TSC frequency is ~2500000 KHz 00:05:56.592 EAL: Main lcore 0 is ready (tid=7f041713eb00;cpuset=[0]) 00:05:56.592 EAL: Trying to obtain current memory policy. 00:05:56.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.592 EAL: Restoring previous memory policy: 0 00:05:56.592 EAL: request: mp_malloc_sync 00:05:56.592 EAL: No shared files mode enabled, IPC is disabled 00:05:56.592 EAL: Heap on socket 0 was expanded by 2MB 00:05:56.592 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001000000 00:05:56.592 EAL: PCI memory mapped at 0x202001001000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001002000 00:05:56.592 EAL: PCI memory mapped at 0x202001003000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001004000 00:05:56.592 EAL: PCI memory mapped at 0x202001005000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001006000 00:05:56.592 EAL: PCI memory mapped at 0x202001007000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001008000 00:05:56.592 EAL: PCI memory mapped at 0x202001009000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200100a000 00:05:56.592 EAL: PCI memory mapped at 0x20200100b000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200100c000 00:05:56.592 EAL: PCI memory mapped at 0x20200100d000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200100e000 00:05:56.592 EAL: PCI memory mapped at 0x20200100f000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001010000 00:05:56.592 EAL: PCI memory mapped at 0x202001011000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 
EAL: PCI memory mapped at 0x202001012000 00:05:56.592 EAL: PCI memory mapped at 0x202001013000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001014000 00:05:56.592 EAL: PCI memory mapped at 0x202001015000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001016000 00:05:56.592 EAL: PCI memory mapped at 0x202001017000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001018000 00:05:56.592 EAL: PCI memory mapped at 0x202001019000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200101a000 00:05:56.592 EAL: PCI memory mapped at 0x20200101b000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200101c000 00:05:56.592 EAL: PCI memory mapped at 0x20200101d000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:56.592 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200101e000 00:05:56.592 EAL: PCI memory mapped at 0x20200101f000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001020000 00:05:56.592 EAL: PCI memory mapped at 0x202001021000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001022000 00:05:56.592 EAL: PCI memory mapped at 0x202001023000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001024000 00:05:56.592 EAL: PCI memory mapped at 0x202001025000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001026000 00:05:56.592 EAL: PCI memory mapped at 0x202001027000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001028000 00:05:56.592 EAL: PCI memory mapped at 0x202001029000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 
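The memory setup a little further up reserves address space long before any hugepage is touched: each "Ask a virtual area of 0x400000000 bytes" is one memseg list of 8192 segments x 2 MiB hugepages = 16 GiB of virtual space, and with 4 lists per socket on 2 sockets EAL sets aside 128 GiB of VA that is only backed by pages on demand ("Hugepages will be freed exactly as allocated"). The reservations start at 0x200000000000, SPDK's customary base virtual address (the same value the harness later passes explicitly as --base-virtaddr to env_dpdk_post_init), which is presumably also why the QAT BAR mappings land just past that window, around 0x202001000000. A minimal sketch of pinning that base address from an SPDK application, assuming the spdk_env_opts layout in this tree (in particular its base_virtaddr field):

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "memseg_demo";
        /* The same fixed VA anchor the log shows; the memseg lists are
         * carved out of the window that starts here. */
        opts.base_virtaddr = 0x200000000000ULL;

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        printf("EAL memory reserved from 0x%llx\n",
               (unsigned long long)opts.base_virtaddr);
        spdk_env_fini();
        return 0;
    }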
00:05:56.592 EAL: PCI memory mapped at 0x20200102a000 00:05:56.592 EAL: PCI memory mapped at 0x20200102b000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200102c000 00:05:56.592 EAL: PCI memory mapped at 0x20200102d000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200102e000 00:05:56.592 EAL: PCI memory mapped at 0x20200102f000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001030000 00:05:56.592 EAL: PCI memory mapped at 0x202001031000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001032000 00:05:56.592 EAL: PCI memory mapped at 0x202001033000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001034000 00:05:56.592 EAL: PCI memory mapped at 0x202001035000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001036000 00:05:56.592 EAL: PCI memory mapped at 0x202001037000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x202001038000 00:05:56.592 EAL: PCI memory mapped at 0x202001039000 00:05:56.592 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:56.592 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:56.592 EAL: probe driver: 8086:37c9 qat 00:05:56.592 EAL: PCI memory mapped at 0x20200103a000 00:05:56.593 EAL: PCI memory mapped at 0x20200103b000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:56.593 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200103c000 00:05:56.593 EAL: PCI memory mapped at 0x20200103d000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:56.593 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200103e000 00:05:56.593 EAL: PCI memory mapped at 0x20200103f000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001040000 00:05:56.593 EAL: PCI memory mapped at 0x202001041000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:56.593 EAL: probe driver: 
8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001042000 00:05:56.593 EAL: PCI memory mapped at 0x202001043000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001044000 00:05:56.593 EAL: PCI memory mapped at 0x202001045000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001046000 00:05:56.593 EAL: PCI memory mapped at 0x202001047000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001048000 00:05:56.593 EAL: PCI memory mapped at 0x202001049000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200104a000 00:05:56.593 EAL: PCI memory mapped at 0x20200104b000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200104c000 00:05:56.593 EAL: PCI memory mapped at 0x20200104d000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200104e000 00:05:56.593 EAL: PCI memory mapped at 0x20200104f000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001050000 00:05:56.593 EAL: PCI memory mapped at 0x202001051000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001052000 00:05:56.593 EAL: PCI memory mapped at 0x202001053000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001054000 00:05:56.593 EAL: PCI memory mapped at 0x202001055000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001056000 00:05:56.593 EAL: PCI memory mapped at 0x202001057000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001058000 00:05:56.593 EAL: PCI memory mapped at 0x202001059000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:05:56.593 EAL: 
probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200105a000 00:05:56.593 EAL: PCI memory mapped at 0x20200105b000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200105c000 00:05:56.593 EAL: PCI memory mapped at 0x20200105d000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:56.593 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200105e000 00:05:56.593 EAL: PCI memory mapped at 0x20200105f000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:56.593 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001060000 00:05:56.593 EAL: PCI memory mapped at 0x202001061000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x202001060000 00:05:56.593 EAL: PCI memory unmapped at 0x202001061000 00:05:56.593 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001062000 00:05:56.593 EAL: PCI memory mapped at 0x202001063000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x202001062000 00:05:56.593 EAL: PCI memory unmapped at 0x202001063000 00:05:56.593 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001064000 00:05:56.593 EAL: PCI memory mapped at 0x202001065000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x202001064000 00:05:56.593 EAL: PCI memory unmapped at 0x202001065000 00:05:56.593 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001066000 00:05:56.593 EAL: PCI memory mapped at 0x202001067000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x202001066000 00:05:56.593 EAL: PCI memory unmapped at 0x202001067000 00:05:56.593 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x202001068000 00:05:56.593 EAL: PCI memory mapped at 0x202001069000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x202001068000 00:05:56.593 EAL: PCI memory unmapped at 0x202001069000 00:05:56.593 EAL: Requested device 0000:3d:01.4 
cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200106a000 00:05:56.593 EAL: PCI memory mapped at 0x20200106b000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x20200106a000 00:05:56.593 EAL: PCI memory unmapped at 0x20200106b000 00:05:56.593 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200106c000 00:05:56.593 EAL: PCI memory mapped at 0x20200106d000 00:05:56.593 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:56.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.593 EAL: PCI memory unmapped at 0x20200106c000 00:05:56.593 EAL: PCI memory unmapped at 0x20200106d000 00:05:56.593 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:56.593 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:56.593 EAL: probe driver: 8086:37c9 qat 00:05:56.593 EAL: PCI memory mapped at 0x20200106e000 00:05:56.594 EAL: PCI memory mapped at 0x20200106f000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200106e000 00:05:56.594 EAL: PCI memory unmapped at 0x20200106f000 00:05:56.594 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001070000 00:05:56.594 EAL: PCI memory mapped at 0x202001071000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001070000 00:05:56.594 EAL: PCI memory unmapped at 0x202001071000 00:05:56.594 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001072000 00:05:56.594 EAL: PCI memory mapped at 0x202001073000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001072000 00:05:56.594 EAL: PCI memory unmapped at 0x202001073000 00:05:56.594 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001074000 00:05:56.594 EAL: PCI memory mapped at 0x202001075000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001074000 00:05:56.594 EAL: PCI memory unmapped at 0x202001075000 00:05:56.594 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001076000 00:05:56.594 EAL: PCI memory mapped at 0x202001077000 00:05:56.594 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001076000 00:05:56.594 EAL: PCI memory unmapped at 0x202001077000 00:05:56.594 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001078000 00:05:56.594 EAL: PCI memory mapped at 0x202001079000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001078000 00:05:56.594 EAL: PCI memory unmapped at 0x202001079000 00:05:56.594 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200107a000 00:05:56.594 EAL: PCI memory mapped at 0x20200107b000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200107a000 00:05:56.594 EAL: PCI memory unmapped at 0x20200107b000 00:05:56.594 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200107c000 00:05:56.594 EAL: PCI memory mapped at 0x20200107d000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200107c000 00:05:56.594 EAL: PCI memory unmapped at 0x20200107d000 00:05:56.594 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:56.594 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200107e000 00:05:56.594 EAL: PCI memory mapped at 0x20200107f000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200107e000 00:05:56.594 EAL: PCI memory unmapped at 0x20200107f000 00:05:56.594 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001080000 00:05:56.594 EAL: PCI memory mapped at 0x202001081000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001080000 00:05:56.594 EAL: PCI memory unmapped at 0x202001081000 00:05:56.594 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001082000 00:05:56.594 EAL: PCI memory mapped at 0x202001083000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001082000 00:05:56.594 EAL: PCI memory unmapped at 0x202001083000 
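The pattern above is worth decoding: all 16 QAT VFs on each of buses 1a, 1c and 1e attach cleanly (3 x 16 = 48 devices), and from 0000:3d:01.0 onward every probe ends in "qat_pci_device_allocate(): Reached maximum number of QAT devices", after which EAL unmaps the BARs again and reports the device "cannot be used". Going by this log, the QAT common code in DPDK caps its device table at 48 entries, so the remaining VFs on buses 3d and 3f are skipped; the messages are noisy but harmless for this test run. A schematic sketch of that bounded-allocation pattern (illustrative only, not the actual qat_pci_device_allocate() from DPDK):

    #include <stdio.h>

    #define MAX_QAT_DEVICES 48   /* matches the 48 successful probes above */

    struct qat_dev { char bdf[16]; int in_use; };
    static struct qat_dev qat_devs[MAX_QAT_DEVICES];

    /* Hand out a slot for a newly probed VF, or NULL when the table is full. */
    static struct qat_dev *qat_dev_allocate(const char *bdf)
    {
        for (int i = 0; i < MAX_QAT_DEVICES; i++) {
            if (!qat_devs[i].in_use) {
                qat_devs[i].in_use = 1;
                snprintf(qat_devs[i].bdf, sizeof(qat_devs[i].bdf), "%s", bdf);
                return &qat_devs[i];
            }
        }
        fprintf(stderr, "Reached maximum number of QAT devices\n");
        return NULL;   /* caller unmaps the BARs and skips the device */
    }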
00:05:56.594 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001084000 00:05:56.594 EAL: PCI memory mapped at 0x202001085000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001084000 00:05:56.594 EAL: PCI memory unmapped at 0x202001085000 00:05:56.594 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001086000 00:05:56.594 EAL: PCI memory mapped at 0x202001087000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001086000 00:05:56.594 EAL: PCI memory unmapped at 0x202001087000 00:05:56.594 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x202001088000 00:05:56.594 EAL: PCI memory mapped at 0x202001089000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x202001088000 00:05:56.594 EAL: PCI memory unmapped at 0x202001089000 00:05:56.594 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200108a000 00:05:56.594 EAL: PCI memory mapped at 0x20200108b000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200108a000 00:05:56.594 EAL: PCI memory unmapped at 0x20200108b000 00:05:56.594 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200108c000 00:05:56.594 EAL: PCI memory mapped at 0x20200108d000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200108c000 00:05:56.594 EAL: PCI memory unmapped at 0x20200108d000 00:05:56.594 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:56.594 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:56.594 EAL: probe driver: 8086:37c9 qat 00:05:56.594 EAL: PCI memory mapped at 0x20200108e000 00:05:56.594 EAL: PCI memory mapped at 0x20200108f000 00:05:56.594 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:56.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.594 EAL: PCI memory unmapped at 0x20200108e000 00:05:56.594 EAL: PCI memory unmapped at 0x20200108f000 00:05:56.594 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x202001090000 00:05:56.595 EAL: PCI memory 
mapped at 0x202001091000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x202001090000 00:05:56.595 EAL: PCI memory unmapped at 0x202001091000 00:05:56.595 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x202001092000 00:05:56.595 EAL: PCI memory mapped at 0x202001093000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x202001092000 00:05:56.595 EAL: PCI memory unmapped at 0x202001093000 00:05:56.595 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x202001094000 00:05:56.595 EAL: PCI memory mapped at 0x202001095000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x202001094000 00:05:56.595 EAL: PCI memory unmapped at 0x202001095000 00:05:56.595 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x202001096000 00:05:56.595 EAL: PCI memory mapped at 0x202001097000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x202001096000 00:05:56.595 EAL: PCI memory unmapped at 0x202001097000 00:05:56.595 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x202001098000 00:05:56.595 EAL: PCI memory mapped at 0x202001099000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x202001098000 00:05:56.595 EAL: PCI memory unmapped at 0x202001099000 00:05:56.595 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x20200109a000 00:05:56.595 EAL: PCI memory mapped at 0x20200109b000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x20200109a000 00:05:56.595 EAL: PCI memory unmapped at 0x20200109b000 00:05:56.595 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x20200109c000 00:05:56.595 EAL: PCI memory mapped at 0x20200109d000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x20200109c000 
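In setups where the over-limit VFs are not needed, the probe set could be narrowed with EAL's -a/--allow option so that devices beyond the first 48 are never mapped and the "cannot be used" churn disappears; the autotest run simply lets EAL scan everything. A minimal standalone sketch, with an arbitrary example BDF:

    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        /* Probe a single QAT VF instead of every VF in the system;
         * repeat "-a <BDF>" for each device that should be visible. */
        char *eal_argv[] = { "allowlist_demo", "-a", "0000:1a:01.0", NULL };
        int eal_argc = 3;

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        puts("EAL up with a restricted PCI allow list");
        return rte_eal_cleanup();
    }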
00:05:56.595 EAL: PCI memory unmapped at 0x20200109d000 00:05:56.595 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:56.595 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:05:56.595 EAL: probe driver: 8086:37c9 qat 00:05:56.595 EAL: PCI memory mapped at 0x20200109e000 00:05:56.595 EAL: PCI memory mapped at 0x20200109f000 00:05:56.595 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:56.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.595 EAL: PCI memory unmapped at 0x20200109e000 00:05:56.595 EAL: PCI memory unmapped at 0x20200109f000 00:05:56.595 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:56.595 EAL: Mem event callback 'spdk:(nil)' registered 00:05:56.595 00:05:56.595 00:05:56.595 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.595 http://cunit.sourceforge.net/ 00:05:56.595 00:05:56.595 00:05:56.595 Suite: components_suite 00:05:56.595 Test: vtophys_malloc_test ...passed 00:05:56.595 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:56.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.595 EAL: Restoring previous memory policy: 4 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was expanded by 4MB 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was shrunk by 4MB 00:05:56.595 EAL: Trying to obtain current memory policy. 00:05:56.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.595 EAL: Restoring previous memory policy: 4 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was expanded by 6MB 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was shrunk by 6MB 00:05:56.595 EAL: Trying to obtain current memory policy. 00:05:56.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.595 EAL: Restoring previous memory policy: 4 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was expanded by 10MB 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was shrunk by 10MB 00:05:56.595 EAL: Trying to obtain current memory policy. 
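The earlier "EAL: Mem event callback 'spdk:(nil)' registered" line is the hook behind the vtophys tests running here: SPDK registers a DPDK memory-event callback so that every time the heap grows or shrinks (the "expanded by"/"shrunk by" pairs of 4, 6 and 10 MB above, climbing below through 18, 34, 66, 130, 258, 514 and finally 1026 MB) it can update its virtual-to-physical translation map before the buffer reaches the caller. A minimal sketch of the same public DPDK mechanism; the callback body here only logs:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    /* Invoked by EAL whenever hugepage memory is added to or removed from the heap. */
    static void
    mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("mem event: %s addr=%p len=%zu\n",
               type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        /* Same mechanism behind the "Mem event callback ... registered" line. */
        rte_mem_event_callback_register("demo", mem_event_cb, NULL);

        /* A 4 MB allocation/free pair produces the same kind of
         * expand-by/shrink-by messages seen in the test output. */
        void *p = rte_malloc(NULL, 4 * 1024 * 1024, 0);
        rte_free(p);

        return rte_eal_cleanup();
    }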
00:05:56.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.595 EAL: Restoring previous memory policy: 4 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.595 EAL: request: mp_malloc_sync 00:05:56.595 EAL: No shared files mode enabled, IPC is disabled 00:05:56.595 EAL: Heap on socket 0 was expanded by 18MB 00:05:56.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was shrunk by 18MB 00:05:56.596 EAL: Trying to obtain current memory policy. 00:05:56.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.596 EAL: Restoring previous memory policy: 4 00:05:56.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was expanded by 34MB 00:05:56.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was shrunk by 34MB 00:05:56.596 EAL: Trying to obtain current memory policy. 00:05:56.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.596 EAL: Restoring previous memory policy: 4 00:05:56.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was expanded by 66MB 00:05:56.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was shrunk by 66MB 00:05:56.596 EAL: Trying to obtain current memory policy. 00:05:56.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.596 EAL: Restoring previous memory policy: 4 00:05:56.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.596 EAL: request: mp_malloc_sync 00:05:56.596 EAL: No shared files mode enabled, IPC is disabled 00:05:56.596 EAL: Heap on socket 0 was expanded by 130MB 00:05:56.887 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.887 EAL: request: mp_malloc_sync 00:05:56.887 EAL: No shared files mode enabled, IPC is disabled 00:05:56.887 EAL: Heap on socket 0 was shrunk by 130MB 00:05:56.887 EAL: Trying to obtain current memory policy. 00:05:56.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.887 EAL: Restoring previous memory policy: 4 00:05:56.887 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.887 EAL: request: mp_malloc_sync 00:05:56.887 EAL: No shared files mode enabled, IPC is disabled 00:05:56.887 EAL: Heap on socket 0 was expanded by 258MB 00:05:56.887 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.887 EAL: request: mp_malloc_sync 00:05:56.887 EAL: No shared files mode enabled, IPC is disabled 00:05:56.887 EAL: Heap on socket 0 was shrunk by 258MB 00:05:56.887 EAL: Trying to obtain current memory policy. 
00:05:56.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.887 EAL: Restoring previous memory policy: 4 00:05:56.887 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.887 EAL: request: mp_malloc_sync 00:05:56.887 EAL: No shared files mode enabled, IPC is disabled 00:05:56.887 EAL: Heap on socket 0 was expanded by 514MB 00:05:57.149 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.149 EAL: request: mp_malloc_sync 00:05:57.149 EAL: No shared files mode enabled, IPC is disabled 00:05:57.149 EAL: Heap on socket 0 was shrunk by 514MB 00:05:57.149 EAL: Trying to obtain current memory policy. 00:05:57.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.418 EAL: Restoring previous memory policy: 4 00:05:57.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.418 EAL: request: mp_malloc_sync 00:05:57.419 EAL: No shared files mode enabled, IPC is disabled 00:05:57.419 EAL: Heap on socket 0 was expanded by 1026MB 00:05:57.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.679 EAL: request: mp_malloc_sync 00:05:57.679 EAL: No shared files mode enabled, IPC is disabled 00:05:57.679 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:57.679 passed 00:05:57.679 00:05:57.679 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.679 suites 1 1 n/a 0 0 00:05:57.679 tests 2 2 2 0 0 00:05:57.679 asserts 6597 6597 6597 0 n/a 00:05:57.679 00:05:57.679 Elapsed time = 1.018 seconds 00:05:57.679 EAL: No shared files mode enabled, IPC is disabled 00:05:57.679 EAL: No shared files mode enabled, IPC is disabled 00:05:57.679 EAL: No shared files mode enabled, IPC is disabled 00:05:57.679 00:05:57.679 real 0m1.221s 00:05:57.679 user 0m0.689s 00:05:57.679 sys 0m0.503s 00:05:57.679 11:47:43 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.679 11:47:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:57.679 ************************************ 00:05:57.679 END TEST env_vtophys 00:05:57.679 ************************************ 00:05:57.679 11:47:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.679 11:47:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.679 11:47:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.679 11:47:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.679 ************************************ 00:05:57.679 START TEST env_pci 00:05:57.679 ************************************ 00:05:57.679 11:47:43 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.679 00:05:57.679 00:05:57.679 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.679 http://cunit.sourceforge.net/ 00:05:57.679 00:05:57.679 00:05:57.679 Suite: pci 00:05:57.679 Test: pci_hook ...[2024-07-25 11:47:43.782921] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4040726 has claimed it 00:05:57.938 EAL: Cannot find device (10000:00:01.0) 00:05:57.938 EAL: Failed to attach device on primary process 00:05:57.938 passed 00:05:57.938 00:05:57.938 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.938 suites 1 1 n/a 0 0 00:05:57.938 tests 1 1 1 0 0 00:05:57.938 asserts 25 25 25 0 n/a 00:05:57.938 00:05:57.938 Elapsed time = 0.046 seconds 00:05:57.938 00:05:57.938 real 0m0.075s 00:05:57.938 user 0m0.024s 
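The error lines inside the pci_hook test just above are the point of the test rather than a failure. env_pci works with a made-up address in PCI domain 0x10000, and spdk_pci_device_claim() serializes device ownership between SPDK processes through a lock file named after the BDF under /var/tmp/spdk_pci_lock_..., so the "*ERROR*: Cannot create lock" message is a claim collision the test expects, the "Cannot find device"/"Failed to attach device" lines come from probing a domain that does not exist on this host, and the suite still finishes with "passed". A hypothetical helper illustrating the lock-file approach, not SPDK's actual spdk_pci_device_claim() implementation:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Take a per-device claim file such as /var/tmp/spdk_pci_lock_0000:1a:01.0. */
    static int claim_device(const char *bdf)
    {
        char path[128];
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

        snprintf(path, sizeof(path), "/var/tmp/spdk_pci_lock_%s", bdf);
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;

        /* If another process already holds the write lock, the claim fails,
         * which is the negative path the pci_hook test deliberately exercises. */
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            fprintf(stderr, "device %s already claimed\n", bdf);
            close(fd);
            return -1;
        }
        return fd;   /* keep the fd open for as long as the claim should hold */
    }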
00:05:57.938 sys 0m0.051s 00:05:57.938 11:47:43 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.938 11:47:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:57.938 ************************************ 00:05:57.938 END TEST env_pci 00:05:57.938 ************************************ 00:05:57.938 11:47:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:57.938 11:47:43 env -- env/env.sh@15 -- # uname 00:05:57.938 11:47:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:57.938 11:47:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:57.938 11:47:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.938 11:47:43 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:57.938 11:47:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.938 11:47:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.939 ************************************ 00:05:57.939 START TEST env_dpdk_post_init 00:05:57.939 ************************************ 00:05:57.939 11:47:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.939 EAL: Detected CPU lcores: 112 00:05:57.939 EAL: Detected NUMA nodes: 2 00:05:57.939 EAL: Detected shared linkage of DPDK 00:05:57.939 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.939 EAL: Selected IOVA mode 'PA' 00:05:57.939 EAL: VFIO support initialized 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:57.939 CRYPTODEV: Creating 
cryptodev 0000:1a:01.4_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:57.939 
CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 
0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue 
pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating 
cryptodev 0000:1e:01.4_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:57.939 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.939 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:57.939 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:57.940 
CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:57.940 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:57.940 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:57.940 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:02.1 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:57.940 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:57.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.940 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:57.940 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:58.199 EAL: Using IOMMU type 1 (Type 1) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 
00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:58.199 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:58.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:58.199 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:58.199 EAL: Ignore mapping IO port bar(1) 00:05:58.199 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:58.458 EAL: 
Ignore mapping IO port bar(1) 00:05:58.458 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:58.458 EAL: Ignore mapping IO port bar(1) 00:05:58.458 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:58.458 EAL: Ignore mapping IO port bar(1) 00:05:58.458 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:59.025 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:03.212 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:03.212 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:06:03.472 Starting DPDK initialization... 00:06:03.472 Starting SPDK post initialization... 00:06:03.472 SPDK NVMe probe 00:06:03.472 Attaching to 0000:d8:00.0 00:06:03.472 Attached to 0000:d8:00.0 00:06:03.472 Cleaning up... 00:06:03.472 00:06:03.472 real 0m5.458s 00:06:03.472 user 0m4.050s 00:06:03.472 sys 0m0.459s 00:06:03.472 11:47:49 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.472 11:47:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.472 ************************************ 00:06:03.472 END TEST env_dpdk_post_init 00:06:03.472 ************************************ 00:06:03.472 11:47:49 env -- env/env.sh@26 -- # uname 00:06:03.472 11:47:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:03.472 11:47:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.472 11:47:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.472 11:47:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.472 11:47:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.472 ************************************ 00:06:03.472 START TEST env_mem_callbacks 00:06:03.472 ************************************ 00:06:03.472 11:47:49 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.472 EAL: Detected CPU lcores: 112 00:06:03.472 EAL: Detected NUMA nodes: 2 00:06:03.472 EAL: Detected shared linkage of DPDK 00:06:03.472 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.472 EAL: Selected IOVA mode 'PA' 00:06:03.472 EAL: VFIO support initialized 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: 
Creating cryptodev 0000:1a:01.2_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 
00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.472 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:03.472 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.472 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters 
- name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max 
queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.473 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.473 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:03.473 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 
00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:03.474 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:03.474 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:03.474 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:01.6 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:03.474 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:03.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.474 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:03.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.475 00:06:03.475 00:06:03.475 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.475 http://cunit.sourceforge.net/ 00:06:03.475 00:06:03.475 00:06:03.475 Suite: memory 00:06:03.475 Test: test ... 
00:06:03.475 register 0x200000200000 2097152 00:06:03.475 malloc 3145728 00:06:03.475 register 0x200000400000 4194304 00:06:03.475 buf 0x200000500000 len 3145728 PASSED 00:06:03.475 malloc 64 00:06:03.475 buf 0x2000004fff40 len 64 PASSED 00:06:03.475 malloc 4194304 00:06:03.475 register 0x200000800000 6291456 00:06:03.475 buf 0x200000a00000 len 4194304 PASSED 00:06:03.475 free 0x200000500000 3145728 00:06:03.475 free 0x2000004fff40 64 00:06:03.475 unregister 0x200000400000 4194304 PASSED 00:06:03.475 free 0x200000a00000 4194304 00:06:03.475 unregister 0x200000800000 6291456 PASSED 00:06:03.475 malloc 8388608 00:06:03.475 register 0x200000400000 10485760 00:06:03.475 buf 0x200000600000 len 8388608 PASSED 00:06:03.475 free 0x200000600000 8388608 00:06:03.475 unregister 0x200000400000 10485760 PASSED 00:06:03.475 passed 00:06:03.475 00:06:03.475 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.475 suites 1 1 n/a 0 0 00:06:03.475 tests 1 1 1 0 0 00:06:03.475 asserts 15 15 15 0 n/a 00:06:03.475 00:06:03.475 Elapsed time = 0.006 seconds 00:06:03.475 00:06:03.475 real 0m0.113s 00:06:03.475 user 0m0.031s 00:06:03.475 sys 0m0.081s 00:06:03.475 11:47:49 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.475 11:47:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:03.475 ************************************ 00:06:03.475 END TEST env_mem_callbacks 00:06:03.475 ************************************ 00:06:03.734 00:06:03.734 real 0m7.592s 00:06:03.734 user 0m5.172s 00:06:03.734 sys 0m1.483s 00:06:03.734 11:47:49 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.734 11:47:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.734 ************************************ 00:06:03.734 END TEST env 00:06:03.734 ************************************ 00:06:03.734 11:47:49 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.734 11:47:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.734 11:47:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.734 11:47:49 -- common/autotest_common.sh@10 -- # set +x 00:06:03.734 ************************************ 00:06:03.734 START TEST rpc 00:06:03.734 ************************************ 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.734 * Looking for test storage... 00:06:03.734 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:03.734 11:47:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4041891 00:06:03.734 11:47:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.734 11:47:49 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:03.734 11:47:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4041891 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@831 -- # '[' -z 4041891 ']' 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
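The env_mem_callbacks output above comes from the SPDK env library's memory-map notification hooks: each "register 0x.../unregister 0x..." line is a notify callback firing as hugepage regions are added to or removed from the process. Below is a minimal sketch of attaching such a callback with spdk_mem_map_alloc(); it is illustrative only, not the actual test source, and the my_notify/my_ops/application names are hypothetical.

```c
#include <stdio.h>
#include "spdk/env.h"

/* Invoked for every memory region the SPDK/DPDK env registers or unregisters. */
static int
my_notify(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	(void)cb_ctx;
	(void)map;
	printf("%s %p len %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops my_ops = {
	.notify_cb = my_notify,
	.are_contiguous = NULL,
};

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "mem_callbacks_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Allocating the map invokes my_notify for every region that is
	 * already registered with the memory subsystem. */
	map = spdk_mem_map_alloc(0, &my_ops, NULL);
	if (map == NULL) {
		return 1;
	}

	/* A hugepage allocation/free can trigger further REGISTER and
	 * UNREGISTER notifications, like the malloc/free pairs above. */
	buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
	spdk_dma_free(buf);

	spdk_mem_map_free(&map);
	return 0;
}
```

Allocating the map replays notifications for memory that is already registered, and subsequent hugepage allocations and frees can produce further REGISTER/UNREGISTER pairs, which is the pattern visible in the malloc/register and free/unregister lines above.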
00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.734 11:47:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.993 [2024-07-25 11:47:49.881068] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:03.993 [2024-07-25 11:47:49.881133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041891 ] 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:03.993 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:03.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:03.993 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:03.993 [2024-07-25 11:47:50.012493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.993 [2024-07-25 11:47:50.106900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:03.993 [2024-07-25 11:47:50.106941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4041891' to capture a snapshot of events at runtime. 00:06:03.993 [2024-07-25 11:47:50.106954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.993 [2024-07-25 11:47:50.106966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.993 [2024-07-25 11:47:50.106975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4041891 for offline analysis/debug. 
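The app_setup_trace notices above give two ways to inspect the tracepoints enabled by -e bdev: attach spdk_trace to the live process, or copy the per-pid shared-memory file out of /dev/shm for offline analysis. A short sketch of both routes follows; the build/bin location of the spdk_trace tool is an assumption, and only the -s/-p options quoted in the notice itself are used.

    # Offline route: preserve the trace file named in the notice before the target exits.
    cp /dev/shm/spdk_tgt_trace.pid4041891 /tmp/
    # Live route: exactly the command the notice prints.
    ./build/bin/spdk_trace -s spdk_tgt -p 4041891
    # The rpc_trace_cmd_test further down queries the same state over RPC:
    ./scripts/rpc.py trace_get_info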
00:06:03.993 [2024-07-25 11:47:50.107010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.928 11:47:50 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.928 11:47:50 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.928 11:47:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:04.928 11:47:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:04.928 11:47:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:04.928 11:47:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:04.928 11:47:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.928 11:47:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.928 11:47:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.928 ************************************ 00:06:04.928 START TEST rpc_integrity 00:06:04.928 ************************************ 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.928 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.928 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.929 { 00:06:04.929 "name": "Malloc0", 00:06:04.929 "aliases": [ 00:06:04.929 "d4633074-50b7-483d-99d5-6ef21450f1ba" 00:06:04.929 ], 00:06:04.929 "product_name": "Malloc disk", 00:06:04.929 "block_size": 512, 00:06:04.929 "num_blocks": 16384, 00:06:04.929 "uuid": "d4633074-50b7-483d-99d5-6ef21450f1ba", 00:06:04.929 "assigned_rate_limits": { 00:06:04.929 "rw_ios_per_sec": 0, 00:06:04.929 "rw_mbytes_per_sec": 0, 00:06:04.929 "r_mbytes_per_sec": 0, 00:06:04.929 "w_mbytes_per_sec": 0 00:06:04.929 }, 00:06:04.929 
"claimed": false, 00:06:04.929 "zoned": false, 00:06:04.929 "supported_io_types": { 00:06:04.929 "read": true, 00:06:04.929 "write": true, 00:06:04.929 "unmap": true, 00:06:04.929 "flush": true, 00:06:04.929 "reset": true, 00:06:04.929 "nvme_admin": false, 00:06:04.929 "nvme_io": false, 00:06:04.929 "nvme_io_md": false, 00:06:04.929 "write_zeroes": true, 00:06:04.929 "zcopy": true, 00:06:04.929 "get_zone_info": false, 00:06:04.929 "zone_management": false, 00:06:04.929 "zone_append": false, 00:06:04.929 "compare": false, 00:06:04.929 "compare_and_write": false, 00:06:04.929 "abort": true, 00:06:04.929 "seek_hole": false, 00:06:04.929 "seek_data": false, 00:06:04.929 "copy": true, 00:06:04.929 "nvme_iov_md": false 00:06:04.929 }, 00:06:04.929 "memory_domains": [ 00:06:04.929 { 00:06:04.929 "dma_device_id": "system", 00:06:04.929 "dma_device_type": 1 00:06:04.929 }, 00:06:04.929 { 00:06:04.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.929 "dma_device_type": 2 00:06:04.929 } 00:06:04.929 ], 00:06:04.929 "driver_specific": {} 00:06:04.929 } 00:06:04.929 ]' 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.929 [2024-07-25 11:47:50.958433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.929 [2024-07-25 11:47:50.958471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.929 [2024-07-25 11:47:50.958487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17fd5f0 00:06:04.929 [2024-07-25 11:47:50.958498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.929 [2024-07-25 11:47:50.959939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.929 [2024-07-25 11:47:50.959965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.929 Passthru0 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.929 11:47:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.929 { 00:06:04.929 "name": "Malloc0", 00:06:04.929 "aliases": [ 00:06:04.929 "d4633074-50b7-483d-99d5-6ef21450f1ba" 00:06:04.929 ], 00:06:04.929 "product_name": "Malloc disk", 00:06:04.929 "block_size": 512, 00:06:04.929 "num_blocks": 16384, 00:06:04.929 "uuid": "d4633074-50b7-483d-99d5-6ef21450f1ba", 00:06:04.929 "assigned_rate_limits": { 00:06:04.929 "rw_ios_per_sec": 0, 00:06:04.929 "rw_mbytes_per_sec": 0, 00:06:04.929 "r_mbytes_per_sec": 0, 00:06:04.929 "w_mbytes_per_sec": 0 00:06:04.929 }, 00:06:04.929 "claimed": true, 00:06:04.929 "claim_type": "exclusive_write", 00:06:04.929 "zoned": false, 00:06:04.929 "supported_io_types": { 00:06:04.929 "read": true, 00:06:04.929 "write": true, 00:06:04.929 "unmap": true, 00:06:04.929 "flush": true, 
00:06:04.929 "reset": true, 00:06:04.929 "nvme_admin": false, 00:06:04.929 "nvme_io": false, 00:06:04.929 "nvme_io_md": false, 00:06:04.929 "write_zeroes": true, 00:06:04.929 "zcopy": true, 00:06:04.929 "get_zone_info": false, 00:06:04.929 "zone_management": false, 00:06:04.929 "zone_append": false, 00:06:04.929 "compare": false, 00:06:04.929 "compare_and_write": false, 00:06:04.929 "abort": true, 00:06:04.929 "seek_hole": false, 00:06:04.929 "seek_data": false, 00:06:04.929 "copy": true, 00:06:04.929 "nvme_iov_md": false 00:06:04.929 }, 00:06:04.929 "memory_domains": [ 00:06:04.929 { 00:06:04.929 "dma_device_id": "system", 00:06:04.929 "dma_device_type": 1 00:06:04.929 }, 00:06:04.929 { 00:06:04.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.929 "dma_device_type": 2 00:06:04.929 } 00:06:04.929 ], 00:06:04.929 "driver_specific": {} 00:06:04.929 }, 00:06:04.929 { 00:06:04.929 "name": "Passthru0", 00:06:04.929 "aliases": [ 00:06:04.929 "de23af19-82ba-539b-ac82-54f9555c3f19" 00:06:04.929 ], 00:06:04.929 "product_name": "passthru", 00:06:04.929 "block_size": 512, 00:06:04.929 "num_blocks": 16384, 00:06:04.929 "uuid": "de23af19-82ba-539b-ac82-54f9555c3f19", 00:06:04.929 "assigned_rate_limits": { 00:06:04.929 "rw_ios_per_sec": 0, 00:06:04.929 "rw_mbytes_per_sec": 0, 00:06:04.929 "r_mbytes_per_sec": 0, 00:06:04.929 "w_mbytes_per_sec": 0 00:06:04.929 }, 00:06:04.929 "claimed": false, 00:06:04.929 "zoned": false, 00:06:04.929 "supported_io_types": { 00:06:04.929 "read": true, 00:06:04.929 "write": true, 00:06:04.929 "unmap": true, 00:06:04.929 "flush": true, 00:06:04.929 "reset": true, 00:06:04.929 "nvme_admin": false, 00:06:04.929 "nvme_io": false, 00:06:04.929 "nvme_io_md": false, 00:06:04.929 "write_zeroes": true, 00:06:04.929 "zcopy": true, 00:06:04.929 "get_zone_info": false, 00:06:04.929 "zone_management": false, 00:06:04.929 "zone_append": false, 00:06:04.929 "compare": false, 00:06:04.929 "compare_and_write": false, 00:06:04.929 "abort": true, 00:06:04.929 "seek_hole": false, 00:06:04.929 "seek_data": false, 00:06:04.929 "copy": true, 00:06:04.929 "nvme_iov_md": false 00:06:04.929 }, 00:06:04.929 "memory_domains": [ 00:06:04.929 { 00:06:04.929 "dma_device_id": "system", 00:06:04.929 "dma_device_type": 1 00:06:04.929 }, 00:06:04.929 { 00:06:04.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.929 "dma_device_type": 2 00:06:04.929 } 00:06:04.929 ], 00:06:04.929 "driver_specific": { 00:06:04.929 "passthru": { 00:06:04.929 "name": "Passthru0", 00:06:04.929 "base_bdev_name": "Malloc0" 00:06:04.929 } 00:06:04.929 } 00:06:04.929 } 00:06:04.929 ]' 00:06:04.929 11:47:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.929 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.929 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.929 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.929 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.188 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.188 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.188 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.188 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd 
bdev_get_bdevs 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.189 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.189 11:47:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.189 00:06:05.189 real 0m0.304s 00:06:05.189 user 0m0.190s 00:06:05.189 sys 0m0.053s 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 ************************************ 00:06:05.189 END TEST rpc_integrity 00:06:05.189 ************************************ 00:06:05.189 11:47:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.189 11:47:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.189 11:47:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.189 11:47:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 ************************************ 00:06:05.189 START TEST rpc_plugins 00:06:05.189 ************************************ 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.189 { 00:06:05.189 "name": "Malloc1", 00:06:05.189 "aliases": [ 00:06:05.189 "75b7e303-1bdf-4e6e-afe6-72cf2e60498c" 00:06:05.189 ], 00:06:05.189 "product_name": "Malloc disk", 00:06:05.189 "block_size": 4096, 00:06:05.189 "num_blocks": 256, 00:06:05.189 "uuid": "75b7e303-1bdf-4e6e-afe6-72cf2e60498c", 00:06:05.189 "assigned_rate_limits": { 00:06:05.189 "rw_ios_per_sec": 0, 00:06:05.189 "rw_mbytes_per_sec": 0, 00:06:05.189 "r_mbytes_per_sec": 0, 00:06:05.189 "w_mbytes_per_sec": 0 00:06:05.189 }, 00:06:05.189 "claimed": false, 00:06:05.189 "zoned": false, 00:06:05.189 "supported_io_types": { 00:06:05.189 "read": true, 00:06:05.189 "write": true, 00:06:05.189 "unmap": true, 00:06:05.189 "flush": true, 00:06:05.189 "reset": true, 00:06:05.189 "nvme_admin": false, 00:06:05.189 "nvme_io": false, 00:06:05.189 "nvme_io_md": false, 00:06:05.189 "write_zeroes": true, 00:06:05.189 "zcopy": true, 00:06:05.189 "get_zone_info": false, 00:06:05.189 "zone_management": false, 00:06:05.189 "zone_append": false, 00:06:05.189 "compare": false, 00:06:05.189 "compare_and_write": false, 00:06:05.189 "abort": true, 00:06:05.189 "seek_hole": false, 00:06:05.189 "seek_data": false, 00:06:05.189 "copy": true, 00:06:05.189 "nvme_iov_md": false 
00:06:05.189 }, 00:06:05.189 "memory_domains": [ 00:06:05.189 { 00:06:05.189 "dma_device_id": "system", 00:06:05.189 "dma_device_type": 1 00:06:05.189 }, 00:06:05.189 { 00:06:05.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.189 "dma_device_type": 2 00:06:05.189 } 00:06:05.189 ], 00:06:05.189 "driver_specific": {} 00:06:05.189 } 00:06:05.189 ]' 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.189 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.189 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:05.448 11:47:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:05.448 00:06:05.448 real 0m0.145s 00:06:05.448 user 0m0.087s 00:06:05.448 sys 0m0.025s 00:06:05.448 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.448 11:47:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.448 ************************************ 00:06:05.448 END TEST rpc_plugins 00:06:05.448 ************************************ 00:06:05.448 11:47:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:05.448 11:47:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.448 11:47:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.448 11:47:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.448 ************************************ 00:06:05.448 START TEST rpc_trace_cmd_test 00:06:05.448 ************************************ 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:05.448 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4041891", 00:06:05.448 "tpoint_group_mask": "0x8", 00:06:05.448 "iscsi_conn": { 00:06:05.448 "mask": "0x2", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "scsi": { 00:06:05.448 "mask": "0x4", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "bdev": { 00:06:05.448 "mask": "0x8", 00:06:05.448 "tpoint_mask": "0xffffffffffffffff" 00:06:05.448 }, 00:06:05.448 "nvmf_rdma": { 00:06:05.448 "mask": "0x10", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "nvmf_tcp": { 00:06:05.448 "mask": "0x20", 00:06:05.448 
"tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "ftl": { 00:06:05.448 "mask": "0x40", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "blobfs": { 00:06:05.448 "mask": "0x80", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "dsa": { 00:06:05.448 "mask": "0x200", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "thread": { 00:06:05.448 "mask": "0x400", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "nvme_pcie": { 00:06:05.448 "mask": "0x800", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "iaa": { 00:06:05.448 "mask": "0x1000", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "nvme_tcp": { 00:06:05.448 "mask": "0x2000", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "bdev_nvme": { 00:06:05.448 "mask": "0x4000", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 }, 00:06:05.448 "sock": { 00:06:05.448 "mask": "0x8000", 00:06:05.448 "tpoint_mask": "0x0" 00:06:05.448 } 00:06:05.448 }' 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.448 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.705 00:06:05.705 real 0m0.242s 00:06:05.705 user 0m0.199s 00:06:05.705 sys 0m0.037s 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.705 11:47:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 ************************************ 00:06:05.705 END TEST rpc_trace_cmd_test 00:06:05.705 ************************************ 00:06:05.705 11:47:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.705 11:47:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.705 11:47:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.705 11:47:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.705 11:47:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.705 11:47:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 ************************************ 00:06:05.705 START TEST rpc_daemon_integrity 00:06:05.705 ************************************ 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.705 
11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.705 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.964 { 00:06:05.964 "name": "Malloc2", 00:06:05.964 "aliases": [ 00:06:05.964 "2fc77e02-3c2f-4e7a-a799-f07e2017f00e" 00:06:05.964 ], 00:06:05.964 "product_name": "Malloc disk", 00:06:05.964 "block_size": 512, 00:06:05.964 "num_blocks": 16384, 00:06:05.964 "uuid": "2fc77e02-3c2f-4e7a-a799-f07e2017f00e", 00:06:05.964 "assigned_rate_limits": { 00:06:05.964 "rw_ios_per_sec": 0, 00:06:05.964 "rw_mbytes_per_sec": 0, 00:06:05.964 "r_mbytes_per_sec": 0, 00:06:05.964 "w_mbytes_per_sec": 0 00:06:05.964 }, 00:06:05.964 "claimed": false, 00:06:05.964 "zoned": false, 00:06:05.964 "supported_io_types": { 00:06:05.964 "read": true, 00:06:05.964 "write": true, 00:06:05.964 "unmap": true, 00:06:05.964 "flush": true, 00:06:05.964 "reset": true, 00:06:05.964 "nvme_admin": false, 00:06:05.964 "nvme_io": false, 00:06:05.964 "nvme_io_md": false, 00:06:05.964 "write_zeroes": true, 00:06:05.964 "zcopy": true, 00:06:05.964 "get_zone_info": false, 00:06:05.964 "zone_management": false, 00:06:05.964 "zone_append": false, 00:06:05.964 "compare": false, 00:06:05.964 "compare_and_write": false, 00:06:05.964 "abort": true, 00:06:05.964 "seek_hole": false, 00:06:05.964 "seek_data": false, 00:06:05.964 "copy": true, 00:06:05.964 "nvme_iov_md": false 00:06:05.964 }, 00:06:05.964 "memory_domains": [ 00:06:05.964 { 00:06:05.964 "dma_device_id": "system", 00:06:05.964 "dma_device_type": 1 00:06:05.964 }, 00:06:05.964 { 00:06:05.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.964 "dma_device_type": 2 00:06:05.964 } 00:06:05.964 ], 00:06:05.964 "driver_specific": {} 00:06:05.964 } 00:06:05.964 ]' 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.964 [2024-07-25 11:47:51.889058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.964 [2024-07-25 11:47:51.889094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.964 [2024-07-25 11:47:51.889110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a8fb0 00:06:05.964 [2024-07-25 11:47:51.889122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:05.964 [2024-07-25 11:47:51.890392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.964 [2024-07-25 11:47:51.890418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.964 Passthru0 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.964 { 00:06:05.964 "name": "Malloc2", 00:06:05.964 "aliases": [ 00:06:05.964 "2fc77e02-3c2f-4e7a-a799-f07e2017f00e" 00:06:05.964 ], 00:06:05.964 "product_name": "Malloc disk", 00:06:05.964 "block_size": 512, 00:06:05.964 "num_blocks": 16384, 00:06:05.964 "uuid": "2fc77e02-3c2f-4e7a-a799-f07e2017f00e", 00:06:05.964 "assigned_rate_limits": { 00:06:05.964 "rw_ios_per_sec": 0, 00:06:05.964 "rw_mbytes_per_sec": 0, 00:06:05.964 "r_mbytes_per_sec": 0, 00:06:05.964 "w_mbytes_per_sec": 0 00:06:05.964 }, 00:06:05.964 "claimed": true, 00:06:05.964 "claim_type": "exclusive_write", 00:06:05.964 "zoned": false, 00:06:05.964 "supported_io_types": { 00:06:05.964 "read": true, 00:06:05.964 "write": true, 00:06:05.964 "unmap": true, 00:06:05.964 "flush": true, 00:06:05.964 "reset": true, 00:06:05.964 "nvme_admin": false, 00:06:05.964 "nvme_io": false, 00:06:05.964 "nvme_io_md": false, 00:06:05.964 "write_zeroes": true, 00:06:05.964 "zcopy": true, 00:06:05.964 "get_zone_info": false, 00:06:05.964 "zone_management": false, 00:06:05.964 "zone_append": false, 00:06:05.964 "compare": false, 00:06:05.964 "compare_and_write": false, 00:06:05.964 "abort": true, 00:06:05.964 "seek_hole": false, 00:06:05.964 "seek_data": false, 00:06:05.964 "copy": true, 00:06:05.964 "nvme_iov_md": false 00:06:05.964 }, 00:06:05.964 "memory_domains": [ 00:06:05.964 { 00:06:05.964 "dma_device_id": "system", 00:06:05.964 "dma_device_type": 1 00:06:05.964 }, 00:06:05.964 { 00:06:05.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.964 "dma_device_type": 2 00:06:05.964 } 00:06:05.964 ], 00:06:05.964 "driver_specific": {} 00:06:05.964 }, 00:06:05.964 { 00:06:05.964 "name": "Passthru0", 00:06:05.964 "aliases": [ 00:06:05.964 "ba7a9095-b866-5d2c-9941-fb503b832a45" 00:06:05.964 ], 00:06:05.964 "product_name": "passthru", 00:06:05.964 "block_size": 512, 00:06:05.964 "num_blocks": 16384, 00:06:05.964 "uuid": "ba7a9095-b866-5d2c-9941-fb503b832a45", 00:06:05.964 "assigned_rate_limits": { 00:06:05.964 "rw_ios_per_sec": 0, 00:06:05.964 "rw_mbytes_per_sec": 0, 00:06:05.964 "r_mbytes_per_sec": 0, 00:06:05.964 "w_mbytes_per_sec": 0 00:06:05.964 }, 00:06:05.964 "claimed": false, 00:06:05.964 "zoned": false, 00:06:05.964 "supported_io_types": { 00:06:05.964 "read": true, 00:06:05.964 "write": true, 00:06:05.964 "unmap": true, 00:06:05.964 "flush": true, 00:06:05.964 "reset": true, 00:06:05.964 "nvme_admin": false, 00:06:05.964 "nvme_io": false, 00:06:05.964 "nvme_io_md": false, 00:06:05.964 "write_zeroes": true, 00:06:05.964 "zcopy": true, 00:06:05.964 "get_zone_info": false, 00:06:05.964 "zone_management": false, 00:06:05.964 "zone_append": false, 00:06:05.964 "compare": false, 00:06:05.964 "compare_and_write": false, 
00:06:05.964 "abort": true, 00:06:05.964 "seek_hole": false, 00:06:05.964 "seek_data": false, 00:06:05.964 "copy": true, 00:06:05.964 "nvme_iov_md": false 00:06:05.964 }, 00:06:05.964 "memory_domains": [ 00:06:05.964 { 00:06:05.964 "dma_device_id": "system", 00:06:05.964 "dma_device_type": 1 00:06:05.964 }, 00:06:05.964 { 00:06:05.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.964 "dma_device_type": 2 00:06:05.964 } 00:06:05.964 ], 00:06:05.964 "driver_specific": { 00:06:05.964 "passthru": { 00:06:05.964 "name": "Passthru0", 00:06:05.964 "base_bdev_name": "Malloc2" 00:06:05.964 } 00:06:05.964 } 00:06:05.964 } 00:06:05.964 ]' 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.964 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.965 11:47:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.965 00:06:05.965 real 0m0.300s 00:06:05.965 user 0m0.187s 00:06:05.965 sys 0m0.052s 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.965 11:47:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 ************************************ 00:06:05.965 END TEST rpc_daemon_integrity 00:06:05.965 ************************************ 00:06:06.223 11:47:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.223 11:47:52 rpc -- rpc/rpc.sh@84 -- # killprocess 4041891 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@950 -- # '[' -z 4041891 ']' 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@954 -- # kill -0 4041891 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4041891 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4041891' 00:06:06.223 killing process with pid 4041891 00:06:06.223 11:47:52 
rpc -- common/autotest_common.sh@969 -- # kill 4041891 00:06:06.223 11:47:52 rpc -- common/autotest_common.sh@974 -- # wait 4041891 00:06:06.482 00:06:06.482 real 0m2.766s 00:06:06.482 user 0m3.521s 00:06:06.482 sys 0m0.902s 00:06:06.482 11:47:52 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.482 11:47:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.482 ************************************ 00:06:06.482 END TEST rpc 00:06:06.482 ************************************ 00:06:06.482 11:47:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.482 11:47:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.482 11:47:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.482 11:47:52 -- common/autotest_common.sh@10 -- # set +x 00:06:06.482 ************************************ 00:06:06.482 START TEST skip_rpc 00:06:06.482 ************************************ 00:06:06.482 11:47:52 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.741 * Looking for test storage... 00:06:06.741 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:06.741 11:47:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:06.741 11:47:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:06.741 11:47:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.741 11:47:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.741 11:47:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.741 11:47:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.741 ************************************ 00:06:06.741 START TEST skip_rpc 00:06:06.741 ************************************ 00:06:06.741 11:47:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:06.741 11:47:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4042554 00:06:06.741 11:47:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.741 11:47:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.741 11:47:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.741 [2024-07-25 11:47:52.773629] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
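skip_rpc starts this target with --no-rpc-server, so the point of the suite is that an RPC call must fail; the NOT rpc_cmd spdk_get_version check further down asserts exactly that. A condensed sketch of the same negative check, assuming the binaries and default socket path used throughout this log:

    # With --no-rpc-server the target does not start its JSON-RPC server,
    # so the RPC attempt below is expected to fail.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded despite --no-rpc-server" >&2
        exit 1
    fi
    echo "RPC failed as expected"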
00:06:06.741 [2024-07-25 11:47:52.773687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042554 ] 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:06.741 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:06.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:06.741 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:07.000 [2024-07-25 11:47:52.910426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.000 [2024-07-25 11:47:52.994894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4042554 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 4042554 ']' 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 4042554 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 4042554 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4042554' 00:06:12.265 killing process with pid 4042554 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 4042554 00:06:12.265 11:47:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 4042554 00:06:12.265 00:06:12.265 real 0m5.404s 00:06:12.265 user 0m5.070s 00:06:12.265 sys 0m0.354s 00:06:12.265 11:47:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.265 11:47:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.265 ************************************ 00:06:12.265 END TEST skip_rpc 00:06:12.265 ************************************ 00:06:12.265 11:47:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.265 11:47:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.265 11:47:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.265 11:47:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.265 ************************************ 00:06:12.265 START TEST skip_rpc_with_json 00:06:12.265 ************************************ 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4043627 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4043627 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 4043627 ']' 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.265 11:47:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.265 [2024-07-25 11:47:58.258816] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
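skip_rpc_with_json starts a normal target (RPC server enabled) and then, as shown further below, verifies that no TCP transport exists yet, creates one, and saves the full runtime configuration as JSON (the large config dump at the end of this excerpt). A rough equivalent of those three RPC steps, assuming the standard rpc.py client, the default socket path, and a placeholder output file:

    # Expected to fail at first: the TCP transport has not been created yet.
    ./scripts/rpc.py nvmf_get_transports --trtype tcp || echo "no tcp transport yet (expected)"
    # Create the transport, then snapshot the whole runtime configuration to JSON.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/spdk_config.json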
00:06:12.265 [2024-07-25 11:47:58.258876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043627 ] 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:12.265 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:12.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.265 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:12.523 [2024-07-25 11:47:58.391565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.523 [2024-07-25 11:47:58.477702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.089 [2024-07-25 11:47:59.151858] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.089 request: 00:06:13.089 { 00:06:13.089 "trtype": "tcp", 00:06:13.089 "method": "nvmf_get_transports", 00:06:13.089 "req_id": 1 00:06:13.089 } 00:06:13.089 Got JSON-RPC error response 00:06:13.089 response: 00:06:13.089 { 00:06:13.089 "code": -19, 00:06:13.089 "message": "No such device" 00:06:13.089 } 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.089 [2024-07-25 11:47:59.163997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.089 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.347 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.347 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:13.347 { 00:06:13.347 "subsystems": [ 00:06:13.347 { 00:06:13.347 "subsystem": "keyring", 00:06:13.347 "config": [] 00:06:13.347 }, 00:06:13.347 { 00:06:13.347 "subsystem": "iobuf", 00:06:13.347 "config": [ 00:06:13.347 { 00:06:13.347 "method": "iobuf_set_options", 00:06:13.347 "params": { 00:06:13.347 "small_pool_count": 8192, 00:06:13.347 "large_pool_count": 1024, 00:06:13.347 "small_bufsize": 8192, 00:06:13.347 "large_bufsize": 135168 00:06:13.347 } 00:06:13.347 } 00:06:13.347 ] 00:06:13.347 }, 00:06:13.347 { 00:06:13.347 "subsystem": "sock", 00:06:13.347 "config": [ 00:06:13.347 { 00:06:13.347 "method": "sock_set_default_impl", 00:06:13.347 "params": { 00:06:13.347 "impl_name": "posix" 00:06:13.347 } 00:06:13.347 }, 00:06:13.347 { 00:06:13.347 "method": "sock_impl_set_options", 00:06:13.347 "params": { 00:06:13.347 "impl_name": "ssl", 00:06:13.347 "recv_buf_size": 4096, 00:06:13.347 "send_buf_size": 4096, 00:06:13.347 "enable_recv_pipe": true, 00:06:13.347 "enable_quickack": false, 00:06:13.347 "enable_placement_id": 0, 00:06:13.347 "enable_zerocopy_send_server": true, 00:06:13.347 "enable_zerocopy_send_client": false, 00:06:13.347 "zerocopy_threshold": 0, 00:06:13.347 "tls_version": 0, 00:06:13.347 "enable_ktls": false 00:06:13.347 } 00:06:13.347 }, 00:06:13.347 { 00:06:13.348 "method": "sock_impl_set_options", 00:06:13.348 "params": { 00:06:13.348 "impl_name": "posix", 00:06:13.348 "recv_buf_size": 2097152, 00:06:13.348 "send_buf_size": 2097152, 00:06:13.348 "enable_recv_pipe": true, 00:06:13.348 "enable_quickack": false, 00:06:13.348 "enable_placement_id": 0, 00:06:13.348 "enable_zerocopy_send_server": true, 00:06:13.348 "enable_zerocopy_send_client": false, 00:06:13.348 "zerocopy_threshold": 0, 00:06:13.348 "tls_version": 0, 00:06:13.348 "enable_ktls": false 00:06:13.348 } 00:06:13.348 } 00:06:13.348 ] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "vmd", 00:06:13.348 "config": [] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "accel", 00:06:13.348 "config": [ 00:06:13.348 { 00:06:13.348 "method": "accel_set_options", 00:06:13.348 "params": { 00:06:13.348 "small_cache_size": 128, 00:06:13.348 "large_cache_size": 16, 00:06:13.348 "task_count": 2048, 00:06:13.348 "sequence_count": 2048, 00:06:13.348 "buf_count": 2048 00:06:13.348 } 00:06:13.348 } 00:06:13.348 ] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "bdev", 00:06:13.348 "config": [ 00:06:13.348 { 00:06:13.348 "method": "bdev_set_options", 00:06:13.348 "params": { 00:06:13.348 "bdev_io_pool_size": 65535, 00:06:13.348 "bdev_io_cache_size": 256, 00:06:13.348 "bdev_auto_examine": true, 00:06:13.348 "iobuf_small_cache_size": 128, 00:06:13.348 "iobuf_large_cache_size": 16 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "bdev_raid_set_options", 00:06:13.348 "params": { 00:06:13.348 "process_window_size_kb": 1024, 00:06:13.348 "process_max_bandwidth_mb_sec": 0 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "bdev_iscsi_set_options", 00:06:13.348 "params": { 00:06:13.348 "timeout_sec": 30 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "bdev_nvme_set_options", 00:06:13.348 "params": { 00:06:13.348 "action_on_timeout": "none", 00:06:13.348 "timeout_us": 0, 00:06:13.348 "timeout_admin_us": 0, 00:06:13.348 "keep_alive_timeout_ms": 10000, 00:06:13.348 "arbitration_burst": 0, 00:06:13.348 "low_priority_weight": 0, 00:06:13.348 "medium_priority_weight": 0, 00:06:13.348 
"high_priority_weight": 0, 00:06:13.348 "nvme_adminq_poll_period_us": 10000, 00:06:13.348 "nvme_ioq_poll_period_us": 0, 00:06:13.348 "io_queue_requests": 0, 00:06:13.348 "delay_cmd_submit": true, 00:06:13.348 "transport_retry_count": 4, 00:06:13.348 "bdev_retry_count": 3, 00:06:13.348 "transport_ack_timeout": 0, 00:06:13.348 "ctrlr_loss_timeout_sec": 0, 00:06:13.348 "reconnect_delay_sec": 0, 00:06:13.348 "fast_io_fail_timeout_sec": 0, 00:06:13.348 "disable_auto_failback": false, 00:06:13.348 "generate_uuids": false, 00:06:13.348 "transport_tos": 0, 00:06:13.348 "nvme_error_stat": false, 00:06:13.348 "rdma_srq_size": 0, 00:06:13.348 "io_path_stat": false, 00:06:13.348 "allow_accel_sequence": false, 00:06:13.348 "rdma_max_cq_size": 0, 00:06:13.348 "rdma_cm_event_timeout_ms": 0, 00:06:13.348 "dhchap_digests": [ 00:06:13.348 "sha256", 00:06:13.348 "sha384", 00:06:13.348 "sha512" 00:06:13.348 ], 00:06:13.348 "dhchap_dhgroups": [ 00:06:13.348 "null", 00:06:13.348 "ffdhe2048", 00:06:13.348 "ffdhe3072", 00:06:13.348 "ffdhe4096", 00:06:13.348 "ffdhe6144", 00:06:13.348 "ffdhe8192" 00:06:13.348 ] 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "bdev_nvme_set_hotplug", 00:06:13.348 "params": { 00:06:13.348 "period_us": 100000, 00:06:13.348 "enable": false 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "bdev_wait_for_examine" 00:06:13.348 } 00:06:13.348 ] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "scsi", 00:06:13.348 "config": null 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "scheduler", 00:06:13.348 "config": [ 00:06:13.348 { 00:06:13.348 "method": "framework_set_scheduler", 00:06:13.348 "params": { 00:06:13.348 "name": "static" 00:06:13.348 } 00:06:13.348 } 00:06:13.348 ] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "vhost_scsi", 00:06:13.348 "config": [] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "vhost_blk", 00:06:13.348 "config": [] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "ublk", 00:06:13.348 "config": [] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "nbd", 00:06:13.348 "config": [] 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "subsystem": "nvmf", 00:06:13.348 "config": [ 00:06:13.348 { 00:06:13.348 "method": "nvmf_set_config", 00:06:13.348 "params": { 00:06:13.348 "discovery_filter": "match_any", 00:06:13.348 "admin_cmd_passthru": { 00:06:13.348 "identify_ctrlr": false 00:06:13.348 } 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "nvmf_set_max_subsystems", 00:06:13.348 "params": { 00:06:13.348 "max_subsystems": 1024 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "nvmf_set_crdt", 00:06:13.348 "params": { 00:06:13.348 "crdt1": 0, 00:06:13.348 "crdt2": 0, 00:06:13.348 "crdt3": 0 00:06:13.348 } 00:06:13.348 }, 00:06:13.348 { 00:06:13.348 "method": "nvmf_create_transport", 00:06:13.348 "params": { 00:06:13.348 "trtype": "TCP", 00:06:13.348 "max_queue_depth": 128, 00:06:13.348 "max_io_qpairs_per_ctrlr": 127, 00:06:13.348 "in_capsule_data_size": 4096, 00:06:13.348 "max_io_size": 131072, 00:06:13.348 "io_unit_size": 131072, 00:06:13.348 "max_aq_depth": 128, 00:06:13.348 "num_shared_buffers": 511, 00:06:13.348 "buf_cache_size": 4294967295, 00:06:13.348 "dif_insert_or_strip": false, 00:06:13.348 "zcopy": false, 00:06:13.348 "c2h_success": true, 00:06:13.348 "sock_priority": 0, 00:06:13.348 "abort_timeout_sec": 1, 00:06:13.348 "ack_timeout": 0, 00:06:13.348 "data_wr_pool_size": 0 00:06:13.348 } 00:06:13.348 } 00:06:13.348 ] 00:06:13.348 }, 
00:06:13.348 { 00:06:13.348 "subsystem": "iscsi", 00:06:13.348 "config": [ 00:06:13.348 { 00:06:13.348 "method": "iscsi_set_options", 00:06:13.348 "params": { 00:06:13.348 "node_base": "iqn.2016-06.io.spdk", 00:06:13.348 "max_sessions": 128, 00:06:13.348 "max_connections_per_session": 2, 00:06:13.348 "max_queue_depth": 64, 00:06:13.348 "default_time2wait": 2, 00:06:13.348 "default_time2retain": 20, 00:06:13.348 "first_burst_length": 8192, 00:06:13.348 "immediate_data": true, 00:06:13.348 "allow_duplicated_isid": false, 00:06:13.348 "error_recovery_level": 0, 00:06:13.348 "nop_timeout": 60, 00:06:13.348 "nop_in_interval": 30, 00:06:13.348 "disable_chap": false, 00:06:13.348 "require_chap": false, 00:06:13.348 "mutual_chap": false, 00:06:13.349 "chap_group": 0, 00:06:13.349 "max_large_datain_per_connection": 64, 00:06:13.349 "max_r2t_per_connection": 4, 00:06:13.349 "pdu_pool_size": 36864, 00:06:13.349 "immediate_data_pool_size": 16384, 00:06:13.349 "data_out_pool_size": 2048 00:06:13.349 } 00:06:13.349 } 00:06:13.349 ] 00:06:13.349 } 00:06:13.349 ] 00:06:13.349 } 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4043627 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4043627 ']' 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4043627 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4043627 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4043627' 00:06:13.349 killing process with pid 4043627 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4043627 00:06:13.349 11:47:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4043627 00:06:13.914 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4043897 00:06:13.914 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:13.914 11:47:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4043897 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4043897 ']' 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4043897 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4043897 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4043897' 00:06:19.221 killing process with pid 4043897 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4043897 00:06:19.221 11:48:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4043897 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:19.221 00:06:19.221 real 0m6.937s 00:06:19.221 user 0m6.642s 00:06:19.221 sys 0m0.840s 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.221 ************************************ 00:06:19.221 END TEST skip_rpc_with_json 00:06:19.221 ************************************ 00:06:19.221 11:48:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.221 11:48:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.221 11:48:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.221 11:48:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.221 ************************************ 00:06:19.221 START TEST skip_rpc_with_delay 00:06:19.221 ************************************ 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.221 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.222 [2024-07-25 11:48:05.279537] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:19.222 [2024-07-25 11:48:05.279626] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.222 00:06:19.222 real 0m0.088s 00:06:19.222 user 0m0.056s 00:06:19.222 sys 0m0.032s 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.222 11:48:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.222 ************************************ 00:06:19.222 END TEST skip_rpc_with_delay 00:06:19.222 ************************************ 00:06:19.480 11:48:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.480 11:48:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.480 11:48:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.480 11:48:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.480 11:48:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.480 11:48:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.480 ************************************ 00:06:19.480 START TEST exit_on_failed_rpc_init 00:06:19.480 ************************************ 00:06:19.480 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:19.480 11:48:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4045360 00:06:19.480 11:48:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4045360 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 4045360 ']' 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.481 11:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.481 [2024-07-25 11:48:05.454677] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
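Editor's note: the error recorded just above ("Cannot use '--wait-for-rpc' if no RPC server is going to be started.") is the expected outcome of the skip_rpc_with_delay test: spdk_tgt has to reject --wait-for-rpc when --no-rpc-server is also passed, since there is no RPC server whose readiness it could wait for. A minimal stand-alone version of that assertion is sketched below; the binary path is the one used in this run, while the surrounding error handling is illustrative and not part of the test suite.

    # illustrative re-run of the incompatible-flags check seen above
    SPDK_TGT=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        # reaching this branch would mean the flag combination was accepted, which is a bug
        echo "ERROR: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi
    echo "spdk_tgt rejected --wait-for-rpc together with --no-rpc-server, as expected"
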
00:06:19.481 [2024-07-25 11:48:05.454735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045360 ] 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:19.481 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:19.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.481 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:19.481 [2024-07-25 11:48:05.586134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.739 [2024-07-25 11:48:05.676675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:20.307 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.307 [2024-07-25 11:48:06.403351] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:20.307 [2024-07-25 11:48:06.403414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045562 ] 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:20.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.567 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:20.567 [2024-07-25 11:48:06.522542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.567 [2024-07-25 11:48:06.604794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.567 [2024-07-25 11:48:06.604872] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
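Editor's note: the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above, together with the failure messages that follow, is the pass condition of exit_on_failed_rpc_init: the first spdk_tgt instance (core mask 0x1) already owns the default RPC socket, so the second instance (core mask 0x2) cannot start its RPC service and exits non-zero. A rough sketch of the same collision, and of the usual way to run two targets side by side by giving the second one its own RPC socket with -r, follows; the binary path comes from this log, while the socket name and process handling are illustrative only.

    # illustrative: default RPC socket collision vs. per-instance sockets
    SPDK_TGT=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                      # first target claims /var/tmp/spdk.sock
    FIRST_PID=$!
    sleep 2                                   # crude wait for the RPC socket to appear
    "$SPDK_TGT" -m 0x2 \
        || echo "second target failed as expected: RPC socket already in use"
    "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock &   # a separate socket avoids the clash
    SECOND_PID=$!
    kill "$FIRST_PID" "$SECOND_PID"
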
00:06:20.567 [2024-07-25 11:48:06.604887] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:20.567 [2024-07-25 11:48:06.604899] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4045360 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 4045360 ']' 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 4045360 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4045360 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4045360' 00:06:20.827 killing process with pid 4045360 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 4045360 00:06:20.827 11:48:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 4045360 00:06:21.086 00:06:21.086 real 0m1.687s 00:06:21.086 user 0m1.935s 00:06:21.086 sys 0m0.567s 00:06:21.086 11:48:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.086 11:48:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.086 ************************************ 00:06:21.086 END TEST exit_on_failed_rpc_init 00:06:21.086 ************************************ 00:06:21.086 11:48:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:21.086 00:06:21.086 real 0m14.565s 00:06:21.086 user 0m13.854s 00:06:21.086 sys 0m2.120s 00:06:21.086 11:48:07 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.087 11:48:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 ************************************ 00:06:21.087 END TEST skip_rpc 00:06:21.087 ************************************ 00:06:21.087 11:48:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:21.087 11:48:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.087 11:48:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.087 11:48:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.345 ************************************ 00:06:21.345 START TEST rpc_client 00:06:21.345 ************************************ 00:06:21.345 11:48:07 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:21.345 * Looking for test storage... 00:06:21.345 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:06:21.345 11:48:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:21.345 OK 00:06:21.345 11:48:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.345 00:06:21.345 real 0m0.143s 00:06:21.345 user 0m0.059s 00:06:21.345 sys 0m0.094s 00:06:21.345 11:48:07 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.345 11:48:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:21.345 ************************************ 00:06:21.345 END TEST rpc_client 00:06:21.345 ************************************ 00:06:21.345 11:48:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:06:21.345 11:48:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.345 11:48:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.345 11:48:07 -- common/autotest_common.sh@10 -- # set +x 00:06:21.345 ************************************ 00:06:21.345 START TEST json_config 00:06:21.345 ************************************ 00:06:21.345 11:48:07 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:06:21.605 11:48:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.605 11:48:07 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:21.605 11:48:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.605 11:48:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.605 11:48:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.605 11:48:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.605 11:48:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.605 11:48:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.605 11:48:07 json_config -- paths/export.sh@5 -- # export PATH 00:06:21.606 11:48:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@47 -- # : 0 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.606 11:48:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST 
+ SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:21.606 INFO: JSON configuration test init 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.606 11:48:07 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:21.606 11:48:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.606 11:48:07 json_config -- json_config/common.sh@10 -- # shift 00:06:21.606 11:48:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.606 11:48:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.606 11:48:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.606 11:48:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.606 11:48:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.606 11:48:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4045935 00:06:21.606 11:48:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.606 Waiting for target to run... 
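Editor's note: from here the json_config suite starts its target with --wait-for-rpc on a private RPC socket (/var/tmp/spdk_tgt.sock), so that the crypto accel configuration can be pushed over RPC before the subsystems initialize. A condensed sketch of driving such a target by hand is shown below; the rpc.py path, the socket, and the dpdk_cryptodev_scan_accel_module / accel_assign_opc calls are taken from this run, while framework_start_init and the output file are assumptions, not the exact sequence the suite executes (it loads a generated config instead).

    # illustrative: configure the crypto accel module before subsystem init
    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    "$RPC" -s "$SOCK" dpdk_cryptodev_scan_accel_module          # register the DPDK cryptodev accel module
    "$RPC" -s "$SOCK" accel_assign_opc -o encrypt -m dpdk_cryptodev
    "$RPC" -s "$SOCK" accel_assign_opc -o decrypt -m dpdk_cryptodev
    "$RPC" -s "$SOCK" framework_start_init                      # assumed RPC: leave --wait-for-rpc mode
    "$RPC" -s "$SOCK" save_config > /tmp/spdk_tgt_config.json   # snapshot the resulting configuration
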
00:06:21.606 11:48:07 json_config -- json_config/common.sh@25 -- # waitforlisten 4045935 /var/tmp/spdk_tgt.sock 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 4045935 ']' 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.606 11:48:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.606 11:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.606 [2024-07-25 11:48:07.621167] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:21.606 [2024-07-25 11:48:07.621235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045935 ] 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.865 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:21.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 
0000:3d:02.6 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:21.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:21.866 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:21.866 [2024-07-25 11:48:07.979308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.125 [2024-07-25 11:48:08.056178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.693 11:48:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.693 11:48:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:22.693 11:48:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.693 00:06:22.693 11:48:08 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:22.693 11:48:08 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:22.693 11:48:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.693 11:48:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.693 11:48:08 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:06:22.693 11:48:08 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:06:22.693 11:48:08 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:06:22.693 11:48:08 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:06:22.693 11:48:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:06:22.952 [2024-07-25 11:48:08.954835] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:22.952 11:48:08 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:06:22.952 11:48:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:06:23.212 [2024-07-25 11:48:09.179411] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:23.212 11:48:09 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:23.212 11:48:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.212 11:48:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 11:48:09 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:23.212 11:48:09 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:23.212 11:48:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:23.471 [2024-07-25 11:48:09.480452] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.761 11:48:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.761 11:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:28.761 11:48:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:28.761 11:48:14 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@51 -- # sort 00:06:28.762 
11:48:14 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:28.762 11:48:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.762 11:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:06:28.762 11:48:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.762 11:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:28.762 11:48:14 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:28.762 11:48:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:29.021 11:48:14 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:29.021 11:48:14 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:29.021 11:48:14 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:29.021 11:48:15 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:06:29.021 11:48:15 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:06:29.021 11:48:15 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:29.021 11:48:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:29.280 Nvme0n1p0 Nvme0n1p1 00:06:29.280 11:48:15 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:29.280 11:48:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:29.849 [2024-07-25 11:48:15.703156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:29.849 [2024-07-25 11:48:15.703206] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:29.849 00:06:29.849 11:48:15 
json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:29.849 11:48:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:29.849 Malloc3 00:06:29.849 11:48:15 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:29.849 11:48:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:30.107 [2024-07-25 11:48:16.168460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:30.107 [2024-07-25 11:48:16.168503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.107 [2024-07-25 11:48:16.168525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x210bf00 00:06:30.107 [2024-07-25 11:48:16.168536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.107 [2024-07-25 11:48:16.169928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.107 [2024-07-25 11:48:16.169955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:30.107 PTBdevFromMalloc3 00:06:30.108 11:48:16 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:30.108 11:48:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:30.366 Null0 00:06:30.366 11:48:16 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:30.366 11:48:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:30.624 Malloc0 00:06:30.624 11:48:16 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:30.624 11:48:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:30.883 Malloc1 00:06:30.883 11:48:16 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:30.883 11:48:16 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:31.142 102400+0 records in 00:06:31.142 102400+0 records out 00:06:31.142 104857600 bytes (105 MB, 100 MiB) copied, 0.282823 s, 371 MB/s 00:06:31.142 11:48:17 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:31.142 11:48:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:31.400 aio_disk 00:06:31.400 11:48:17 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:31.400 11:48:17 
json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:31.401 11:48:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:35.649 197a28c7-cd66-42a3-b3e0-aa5744f34dd4 00:06:35.649 11:48:21 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:35.649 11:48:21 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:35.649 11:48:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:35.649 11:48:21 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:35.649 11:48:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:35.905 11:48:21 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:35.905 11:48:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:36.162 11:48:22 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:36.162 11:48:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:36.419 11:48:22 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:06:36.419 11:48:22 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:36.419 11:48:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:36.677 MallocForCryptoBdev 00:06:36.677 11:48:22 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:06:36.677 11:48:22 json_config -- json_config/json_config.sh@163 -- # wc -l 00:06:36.677 11:48:22 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:06:36.677 11:48:22 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:06:36.677 11:48:22 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:36.677 11:48:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:36.935 [2024-07-25 11:48:22.834426] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:36.935 CryptoMallocBdev 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@173 -- # 
expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@75 -- # sort 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@76 -- # sort 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:36.935 11:48:22 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:36.935 11:48:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:06:37.194 11:48:23 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.194 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd bdev_register:aio_disk bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba bdev_register:CryptoMallocBdev bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\7\7\b\3\f\f\4\-\d\e\6\0\-\4\c\5\a\-\9\a\0\8\-\1\8\e\f\d\1\9\5\c\6\1\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\5\8\4\8\a\5\4\-\0\c\f\5\-\4\8\5\d\-\9\b\c\6\-\4\3\4\1\2\7\7\1\b\b\b\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\6\a\5\4\3\6\0\-\3\c\b\1\-\4\8\b\e\-\9\2\c\3\-\7\7\9\4\1\6\d\3\c\a\b\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\5\7\d\2\a\8\6\-\7\f\7\c\-\4\7\f\c\-\9\c\3\8\-\6\9\9\a\5\2\f\e\1\3\1\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@90 -- # cat 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd bdev_register:aio_disk bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba bdev_register:CryptoMallocBdev bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:37.195 Expected events matched: 00:06:37.195 bdev_register:277b3ff4-de60-4c5a-9a08-18efd195c61f 00:06:37.195 
bdev_register:95848a54-0cf5-485d-9bc6-43412771bbbd 00:06:37.195 bdev_register:aio_disk 00:06:37.195 bdev_register:c6a54360-3cb1-48be-92c3-779416d3caba 00:06:37.195 bdev_register:CryptoMallocBdev 00:06:37.195 bdev_register:f57d2a86-7f7c-47fc-9c38-699a52fe131b 00:06:37.195 bdev_register:Malloc0 00:06:37.195 bdev_register:Malloc0p0 00:06:37.195 bdev_register:Malloc0p1 00:06:37.195 bdev_register:Malloc0p2 00:06:37.195 bdev_register:Malloc1 00:06:37.195 bdev_register:Malloc3 00:06:37.195 bdev_register:MallocForCryptoBdev 00:06:37.195 bdev_register:Null0 00:06:37.195 bdev_register:Nvme0n1 00:06:37.195 bdev_register:Nvme0n1p0 00:06:37.195 bdev_register:Nvme0n1p1 00:06:37.195 bdev_register:PTBdevFromMalloc3 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:06:37.195 11:48:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.195 11:48:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:37.195 11:48:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.195 11:48:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:37.195 11:48:23 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:37.195 11:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:37.453 MallocBdevForConfigChangeCheck 00:06:37.453 11:48:23 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:37.453 11:48:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.453 11:48:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.453 11:48:23 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:37.453 11:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.711 11:48:23 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:37.711 INFO: shutting down applications... 
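Before the target is shut down and relaunched, the configuration that was just saved is worth recapping: everything above was assembled through plain rpc.py calls against the target socket. The condensed sketch below collects those calls from the trace; the $SPDK and $RPC variables are shorthand introduced here (they are not part of the captured output), and it assumes a target already listening on /var/tmp/spdk_tgt.sock.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Route crypto operations to the DPDK cryptodev accel module, then load the
# generated NVMe subsystem config (the test feeds gen_nvme.sh output into load_config).
$RPC dpdk_cryptodev_scan_accel_module
$RPC accel_assign_opc -o encrypt -m dpdk_cryptodev
$RPC accel_assign_opc -o decrypt -m dpdk_cryptodev
$SPDK/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config

# Block-device layout: splits, malloc, passthru, null and AIO bdevs.
$RPC bdev_split_create Nvme0n1 2            # -> Nvme0n1p0, Nvme0n1p1
$RPC bdev_split_create Malloc0 3            # deferred until Malloc0 is created below
$RPC bdev_malloc_create 8 4096 --name Malloc3
$RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
$RPC bdev_null_create Null0 32 512
$RPC bdev_malloc_create 32 512 --name Malloc0
$RPC bdev_malloc_create 16 4096 --name Malloc1
dd if=/dev/zero of=/sample_aio bs=1024 count=102400
$RPC bdev_aio_create /sample_aio aio_disk 1024

# Logical volumes layered on the first NVMe split.
$RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
$RPC bdev_lvol_create -l lvs_test lvol0 32
$RPC bdev_lvol_create -l lvs_test -t lvol1 32
$RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
$RPC bdev_lvol_clone lvs_test/snapshot0 clone0

# Crypto vbdev over a malloc bdev, using the QAT driver detected above and the
# test's sample key; the target warns that the -p/crypto_pmd parameter is ignored.
$RPC bdev_malloc_create 8 1024 --name MallocForCryptoBdev
$RPC bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456

The notify_get_notifications check above then simply verifies that one bdev_register event was recorded for each of these bdevs.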
00:06:37.711 11:48:23 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:37.711 11:48:23 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:37.711 11:48:23 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:37.711 11:48:23 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:37.969 [2024-07-25 11:48:24.010249] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:40.501 Calling clear_iscsi_subsystem 00:06:40.501 Calling clear_nvmf_subsystem 00:06:40.501 Calling clear_nbd_subsystem 00:06:40.501 Calling clear_ublk_subsystem 00:06:40.501 Calling clear_vhost_blk_subsystem 00:06:40.501 Calling clear_vhost_scsi_subsystem 00:06:40.501 Calling clear_bdev_subsystem 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:40.501 11:48:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:40.759 11:48:26 json_config -- json_config/json_config.sh@349 -- # break 00:06:40.759 11:48:26 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:40.759 11:48:26 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:40.759 11:48:26 json_config -- json_config/common.sh@31 -- # local app=target 00:06:40.759 11:48:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:40.759 11:48:26 json_config -- json_config/common.sh@35 -- # [[ -n 4045935 ]] 00:06:40.759 11:48:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4045935 00:06:40.759 11:48:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:40.760 11:48:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.760 11:48:26 json_config -- json_config/common.sh@41 -- # kill -0 4045935 00:06:40.760 11:48:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:41.325 11:48:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:41.325 11:48:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.325 11:48:27 json_config -- json_config/common.sh@41 -- # kill -0 4045935 00:06:41.325 11:48:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:41.325 11:48:27 json_config -- json_config/common.sh@43 -- # break 00:06:41.325 11:48:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:41.325 11:48:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:41.325 SPDK target shutdown done 00:06:41.325 11:48:27 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:41.325 INFO: relaunching applications... 
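The shutdown just logged is a clear-then-signal pattern: wipe the live configuration with clear_config.py, send SIGINT to the target, and poll until the process exits. A minimal sketch, with the 30-iteration, 0.5 s polling taken from the loop visible above ($app_pid stands in for the recorded target PID):

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config

kill -SIGINT "$app_pid"                        # ask spdk_tgt to shut down cleanly
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2>/dev/null || break    # loop ends once the process is gone
    sleep 0.5
done
echo 'SPDK target shutdown done'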
00:06:41.325 11:48:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:41.325 11:48:27 json_config -- json_config/common.sh@9 -- # local app=target 00:06:41.325 11:48:27 json_config -- json_config/common.sh@10 -- # shift 00:06:41.325 11:48:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.325 11:48:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.325 11:48:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.325 11:48:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.325 11:48:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.325 11:48:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4049513 00:06:41.325 11:48:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.325 Waiting for target to run... 00:06:41.325 11:48:27 json_config -- json_config/common.sh@25 -- # waitforlisten 4049513 /var/tmp/spdk_tgt.sock 00:06:41.325 11:48:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 4049513 ']' 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.325 11:48:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.325 [2024-07-25 11:48:27.423530] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
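Relaunching then consists of starting a fresh spdk_tgt directly from the JSON saved before shutdown and waiting for its RPC socket, exactly as the command line above shows. A standalone version might look like this; the rpc_get_methods poll is an illustrative stand-in for the test's own waitforlisten helper:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $SPDK/spdk_tgt_config.json &
tgt_pid=$!

# Block until the target answers on its RPC socket before issuing further calls.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done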
00:06:41.325 [2024-07-25 11:48:27.423599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049513 ] 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.890 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:41.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:41.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.891 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:41.891 [2024-07-25 11:48:27.940661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.148 [2024-07-25 11:48:28.034277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.149 [2024-07-25 11:48:28.088330] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:42.149 [2024-07-25 11:48:28.096365] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:42.149 [2024-07-25 11:48:28.104382] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:42.149 [2024-07-25 11:48:28.185173] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:44.673 [2024-07-25 11:48:30.325835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:44.673 [2024-07-25 11:48:30.325886] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:44.673 [2024-07-25 11:48:30.325900] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:44.673 [2024-07-25 11:48:30.333854] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:44.673 [2024-07-25 11:48:30.333879] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:44.673 [2024-07-25 11:48:30.341868] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:44.673 [2024-07-25 11:48:30.341890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:44.673 [2024-07-25 11:48:30.349901] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:44.673 [2024-07-25 11:48:30.349927] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:44.673 [2024-07-25 11:48:30.349938] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:47.198 [2024-07-25 11:48:33.252745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:47.198 [2024-07-25 11:48:33.252786] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.198 [2024-07-25 11:48:33.252801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d19f0 00:06:47.198 [2024-07-25 11:48:33.252812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.198 [2024-07-25 11:48:33.253075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.198 [2024-07-25 11:48:33.253091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:47.454 11:48:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.454 11:48:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:47.454 11:48:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:47.454 00:06:47.454 11:48:33 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:47.454 11:48:33 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:47.454 INFO: Checking if target configuration is the same... 00:06:47.454 11:48:33 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.454 11:48:33 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:47.454 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.454 + '[' 2 -ne 2 ']' 00:06:47.454 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:47.454 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:47.454 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:47.454 +++ basename /dev/fd/62 00:06:47.454 ++ mktemp /tmp/62.XXX 00:06:47.454 + tmp_file_1=/tmp/62.SxO 00:06:47.454 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.454 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:47.454 + tmp_file_2=/tmp/spdk_tgt_config.json.KoQ 00:06:47.454 + ret=0 00:06:47.454 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.018 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.018 + diff -u /tmp/62.SxO /tmp/spdk_tgt_config.json.KoQ 00:06:48.019 + echo 'INFO: JSON config files are the same' 00:06:48.019 INFO: JSON config files are the same 00:06:48.019 + rm /tmp/62.SxO /tmp/spdk_tgt_config.json.KoQ 00:06:48.019 + exit 0 00:06:48.019 11:48:33 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:48.019 11:48:33 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:48.019 INFO: changing configuration and checking if this can be detected... 
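The json_diff.sh run above reduces to three steps: dump the live configuration over RPC, normalize both JSON documents with config_filter.py -method sort, and diff the results. Roughly as below; the temp-file names are illustrative, and config_filter.py is assumed to read JSON on stdin and write the sorted form to stdout, which is how json_diff.sh appears to drive it:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SORT="$SPDK/test/json_config/config_filter.py -method sort"

$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.json
$SORT < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json

diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'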
00:06:48.019 11:48:33 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:48.019 11:48:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:48.277 11:48:34 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:48.277 11:48:34 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.277 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:48.277 + '[' 2 -ne 2 ']' 00:06:48.277 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:48.277 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:48.277 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:48.277 +++ basename /dev/fd/62 00:06:48.277 ++ mktemp /tmp/62.XXX 00:06:48.277 + tmp_file_1=/tmp/62.21M 00:06:48.277 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.277 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:48.277 + tmp_file_2=/tmp/spdk_tgt_config.json.giE 00:06:48.277 + ret=0 00:06:48.277 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.535 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.535 + diff -u /tmp/62.21M /tmp/spdk_tgt_config.json.giE 00:06:48.535 + ret=1 00:06:48.535 + echo '=== Start of file: /tmp/62.21M ===' 00:06:48.535 + cat /tmp/62.21M 00:06:48.535 + echo '=== End of file: /tmp/62.21M ===' 00:06:48.535 + echo '' 00:06:48.535 + echo '=== Start of file: /tmp/spdk_tgt_config.json.giE ===' 00:06:48.535 + cat /tmp/spdk_tgt_config.json.giE 00:06:48.535 + echo '=== End of file: /tmp/spdk_tgt_config.json.giE ===' 00:06:48.535 + echo '' 00:06:48.535 + rm /tmp/62.21M /tmp/spdk_tgt_config.json.giE 00:06:48.535 + exit 1 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:48.535 INFO: configuration change detected. 
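The change-detection half is the mirror image: delete the small marker bdev that was created for exactly this purpose, then confirm the same sorted diff is no longer empty. In outline (same shorthand and temp files as the previous sketch):

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
SORT="$SPDK/test/json_config/config_filter.py -method sort"

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck   # remove the marker bdev

$RPC save_config | $SORT > /tmp/live.json
$SORT < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'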
00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:48.535 11:48:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.535 11:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@321 -- # [[ -n 4049513 ]] 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:48.535 11:48:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.535 11:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:06:48.535 11:48:34 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:48.535 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:48.794 11:48:34 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:48.794 11:48:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:49.052 11:48:35 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:49.052 11:48:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:06:49.310 11:48:35 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:49.310 11:48:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.569 11:48:35 json_config -- json_config/json_config.sh@327 -- # killprocess 4049513 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@950 -- # '[' -z 4049513 ']' 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@954 -- # kill -0 4049513 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@955 -- # uname 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4049513 00:06:49.569 11:48:35 json_config -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4049513' 00:06:49.569 killing process with pid 4049513 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@969 -- # kill 4049513 00:06:49.569 11:48:35 json_config -- common/autotest_common.sh@974 -- # wait 4049513 00:06:52.883 11:48:38 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.883 11:48:38 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:52.883 11:48:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.883 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.883 11:48:38 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:52.883 11:48:38 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:52.883 INFO: Success 00:06:52.883 00:06:52.883 real 0m30.841s 00:06:52.883 user 0m35.667s 00:06:52.883 sys 0m3.730s 00:06:52.883 11:48:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.883 11:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.883 ************************************ 00:06:52.883 END TEST json_config 00:06:52.883 ************************************ 00:06:52.883 11:48:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:52.883 11:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.883 11:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.883 11:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.883 ************************************ 00:06:52.883 START TEST json_config_extra_key 00:06:52.883 ************************************ 00:06:52.883 11:48:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:52.883 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:52.883 11:48:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.883 11:48:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.883 11:48:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.883 11:48:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.883 11:48:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.883 11:48:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.883 11:48:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:52.883 11:48:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.883 11:48:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.884 11:48:38 
json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.884 11:48:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:52.884 INFO: launching applications... 00:06:52.884 11:48:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4051495 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:52.884 Waiting for target to run... 
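The extra_key variant that starts here is deliberately simpler: boot the target straight from the static extra_key.json, wait for the RPC socket, and then shut it down again with SIGINT. Condensed from the spdk_tgt command line logged below; the socket wait is the same pattern as in the relaunch sketch earlier:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $SPDK/test/json_config/extra_key.json &
app_pid=$!
echo 'Waiting for target to run...'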
00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4051495 /var/tmp/spdk_tgt.sock 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 4051495 ']' 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:52.884 11:48:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:52.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.884 11:48:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:52.884 [2024-07-25 11:48:38.538592] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:52.884 [2024-07-25 11:48:38.538653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051495 ] 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested 
device 0000:3d:02.5 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:52.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.884 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:53.142 [2024-07-25 11:48:39.048618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.142 [2024-07-25 11:48:39.147761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.399 11:48:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.399 11:48:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:53.399 00:06:53.399 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:53.399 INFO: shutting down applications... 
00:06:53.399 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4051495 ]] 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4051495 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4051495 00:06:53.399 11:48:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4051495 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.965 11:48:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.965 SPDK target shutdown done 00:06:53.965 11:48:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:53.965 Success 00:06:53.965 00:06:53.965 real 0m1.580s 00:06:53.965 user 0m1.101s 00:06:53.965 sys 0m0.609s 00:06:53.965 11:48:39 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.965 11:48:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.965 ************************************ 00:06:53.965 END TEST json_config_extra_key 00:06:53.965 ************************************ 00:06:53.965 11:48:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.965 11:48:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.965 11:48:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.965 11:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:53.965 ************************************ 00:06:53.965 START TEST alias_rpc 00:06:53.965 ************************************ 00:06:53.965 11:48:40 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:54.222 * Looking for test storage... 
00:06:54.222 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:54.222 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.222 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4051812 00:06:54.222 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4051812 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 4051812 ']' 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.222 11:48:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.222 11:48:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.222 [2024-07-25 11:48:40.181479] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:54.222 [2024-07-25 11:48:40.181542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051812 ] 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.222 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:54.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:54.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.223 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:54.223 [2024-07-25 11:48:40.305456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.480 [2024-07-25 11:48:40.389284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.410 11:48:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.410 11:48:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:55.410 11:48:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:55.668 11:48:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4051812 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 4051812 ']' 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 4051812 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4051812 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4051812' 00:06:55.668 killing process with pid 4051812 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 4051812 00:06:55.668 11:48:41 alias_rpc -- common/autotest_common.sh@974 -- # wait 4051812 00:06:55.925 00:06:55.925 real 0m1.969s 00:06:55.925 user 0m2.375s 00:06:55.925 sys 0m0.567s 00:06:55.925 11:48:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.925 11:48:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 ************************************ 00:06:55.925 END TEST alias_rpc 00:06:55.925 ************************************ 00:06:55.925 11:48:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:55.925 11:48:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:55.925 11:48:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.925 11:48:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.925 11:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 ************************************ 00:06:56.183 START TEST spdkcli_tcp 00:06:56.183 ************************************ 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:56.183 * Looking for test storage... 00:06:56.183 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4052141 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4052141 00:06:56.183 11:48:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 4052141 ']' 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:56.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.183 11:48:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 [2024-07-25 11:48:42.255119] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:56.183 [2024-07-25 11:48:42.255185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052141 ] 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:56.442 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:56.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.442 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:56.442 [2024-07-25 11:48:42.389361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.442 [2024-07-25 11:48:42.473590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.442 [2024-07-25 11:48:42.473595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.376 11:48:43 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.376 11:48:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:57.376 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4052404 00:06:57.376 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.376 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.376 [ 00:06:57.376 "bdev_malloc_delete", 00:06:57.376 "bdev_malloc_create", 00:06:57.376 "bdev_null_resize", 00:06:57.376 "bdev_null_delete", 00:06:57.376 "bdev_null_create", 00:06:57.376 "bdev_nvme_cuse_unregister", 00:06:57.376 "bdev_nvme_cuse_register", 00:06:57.376 "bdev_opal_new_user", 00:06:57.376 "bdev_opal_set_lock_state", 00:06:57.376 "bdev_opal_delete", 00:06:57.376 "bdev_opal_get_info", 00:06:57.376 "bdev_opal_create", 00:06:57.376 "bdev_nvme_opal_revert", 00:06:57.376 "bdev_nvme_opal_init", 00:06:57.376 "bdev_nvme_send_cmd", 00:06:57.376 "bdev_nvme_get_path_iostat", 00:06:57.376 "bdev_nvme_get_mdns_discovery_info", 00:06:57.376 "bdev_nvme_stop_mdns_discovery", 00:06:57.376 "bdev_nvme_start_mdns_discovery", 00:06:57.376 "bdev_nvme_set_multipath_policy", 00:06:57.376 "bdev_nvme_set_preferred_path", 00:06:57.376 "bdev_nvme_get_io_paths", 00:06:57.376 "bdev_nvme_remove_error_injection", 00:06:57.376 "bdev_nvme_add_error_injection", 00:06:57.376 "bdev_nvme_get_discovery_info", 00:06:57.376 
"bdev_nvme_stop_discovery", 00:06:57.376 "bdev_nvme_start_discovery", 00:06:57.376 "bdev_nvme_get_controller_health_info", 00:06:57.376 "bdev_nvme_disable_controller", 00:06:57.376 "bdev_nvme_enable_controller", 00:06:57.376 "bdev_nvme_reset_controller", 00:06:57.376 "bdev_nvme_get_transport_statistics", 00:06:57.376 "bdev_nvme_apply_firmware", 00:06:57.376 "bdev_nvme_detach_controller", 00:06:57.376 "bdev_nvme_get_controllers", 00:06:57.376 "bdev_nvme_attach_controller", 00:06:57.376 "bdev_nvme_set_hotplug", 00:06:57.376 "bdev_nvme_set_options", 00:06:57.376 "bdev_passthru_delete", 00:06:57.376 "bdev_passthru_create", 00:06:57.376 "bdev_lvol_set_parent_bdev", 00:06:57.376 "bdev_lvol_set_parent", 00:06:57.376 "bdev_lvol_check_shallow_copy", 00:06:57.376 "bdev_lvol_start_shallow_copy", 00:06:57.376 "bdev_lvol_grow_lvstore", 00:06:57.376 "bdev_lvol_get_lvols", 00:06:57.376 "bdev_lvol_get_lvstores", 00:06:57.376 "bdev_lvol_delete", 00:06:57.376 "bdev_lvol_set_read_only", 00:06:57.376 "bdev_lvol_resize", 00:06:57.376 "bdev_lvol_decouple_parent", 00:06:57.376 "bdev_lvol_inflate", 00:06:57.376 "bdev_lvol_rename", 00:06:57.376 "bdev_lvol_clone_bdev", 00:06:57.376 "bdev_lvol_clone", 00:06:57.376 "bdev_lvol_snapshot", 00:06:57.376 "bdev_lvol_create", 00:06:57.376 "bdev_lvol_delete_lvstore", 00:06:57.376 "bdev_lvol_rename_lvstore", 00:06:57.376 "bdev_lvol_create_lvstore", 00:06:57.376 "bdev_raid_set_options", 00:06:57.377 "bdev_raid_remove_base_bdev", 00:06:57.377 "bdev_raid_add_base_bdev", 00:06:57.377 "bdev_raid_delete", 00:06:57.377 "bdev_raid_create", 00:06:57.377 "bdev_raid_get_bdevs", 00:06:57.377 "bdev_error_inject_error", 00:06:57.377 "bdev_error_delete", 00:06:57.377 "bdev_error_create", 00:06:57.377 "bdev_split_delete", 00:06:57.377 "bdev_split_create", 00:06:57.377 "bdev_delay_delete", 00:06:57.377 "bdev_delay_create", 00:06:57.377 "bdev_delay_update_latency", 00:06:57.377 "bdev_zone_block_delete", 00:06:57.377 "bdev_zone_block_create", 00:06:57.377 "blobfs_create", 00:06:57.377 "blobfs_detect", 00:06:57.377 "blobfs_set_cache_size", 00:06:57.377 "bdev_crypto_delete", 00:06:57.377 "bdev_crypto_create", 00:06:57.377 "bdev_compress_delete", 00:06:57.377 "bdev_compress_create", 00:06:57.377 "bdev_compress_get_orphans", 00:06:57.377 "bdev_aio_delete", 00:06:57.377 "bdev_aio_rescan", 00:06:57.377 "bdev_aio_create", 00:06:57.377 "bdev_ftl_set_property", 00:06:57.377 "bdev_ftl_get_properties", 00:06:57.377 "bdev_ftl_get_stats", 00:06:57.377 "bdev_ftl_unmap", 00:06:57.377 "bdev_ftl_unload", 00:06:57.377 "bdev_ftl_delete", 00:06:57.377 "bdev_ftl_load", 00:06:57.377 "bdev_ftl_create", 00:06:57.377 "bdev_virtio_attach_controller", 00:06:57.377 "bdev_virtio_scsi_get_devices", 00:06:57.377 "bdev_virtio_detach_controller", 00:06:57.377 "bdev_virtio_blk_set_hotplug", 00:06:57.377 "bdev_iscsi_delete", 00:06:57.377 "bdev_iscsi_create", 00:06:57.377 "bdev_iscsi_set_options", 00:06:57.377 "accel_error_inject_error", 00:06:57.377 "ioat_scan_accel_module", 00:06:57.377 "dsa_scan_accel_module", 00:06:57.377 "iaa_scan_accel_module", 00:06:57.377 "dpdk_cryptodev_get_driver", 00:06:57.377 "dpdk_cryptodev_set_driver", 00:06:57.377 "dpdk_cryptodev_scan_accel_module", 00:06:57.377 "compressdev_scan_accel_module", 00:06:57.377 "keyring_file_remove_key", 00:06:57.377 "keyring_file_add_key", 00:06:57.377 "keyring_linux_set_options", 00:06:57.377 "iscsi_get_histogram", 00:06:57.377 "iscsi_enable_histogram", 00:06:57.377 "iscsi_set_options", 00:06:57.377 "iscsi_get_auth_groups", 00:06:57.377 
"iscsi_auth_group_remove_secret", 00:06:57.377 "iscsi_auth_group_add_secret", 00:06:57.377 "iscsi_delete_auth_group", 00:06:57.377 "iscsi_create_auth_group", 00:06:57.377 "iscsi_set_discovery_auth", 00:06:57.377 "iscsi_get_options", 00:06:57.377 "iscsi_target_node_request_logout", 00:06:57.377 "iscsi_target_node_set_redirect", 00:06:57.377 "iscsi_target_node_set_auth", 00:06:57.377 "iscsi_target_node_add_lun", 00:06:57.377 "iscsi_get_stats", 00:06:57.377 "iscsi_get_connections", 00:06:57.377 "iscsi_portal_group_set_auth", 00:06:57.377 "iscsi_start_portal_group", 00:06:57.377 "iscsi_delete_portal_group", 00:06:57.377 "iscsi_create_portal_group", 00:06:57.377 "iscsi_get_portal_groups", 00:06:57.377 "iscsi_delete_target_node", 00:06:57.377 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.377 "iscsi_target_node_add_pg_ig_maps", 00:06:57.377 "iscsi_create_target_node", 00:06:57.377 "iscsi_get_target_nodes", 00:06:57.377 "iscsi_delete_initiator_group", 00:06:57.377 "iscsi_initiator_group_remove_initiators", 00:06:57.377 "iscsi_initiator_group_add_initiators", 00:06:57.377 "iscsi_create_initiator_group", 00:06:57.377 "iscsi_get_initiator_groups", 00:06:57.377 "nvmf_set_crdt", 00:06:57.377 "nvmf_set_config", 00:06:57.377 "nvmf_set_max_subsystems", 00:06:57.377 "nvmf_stop_mdns_prr", 00:06:57.377 "nvmf_publish_mdns_prr", 00:06:57.377 "nvmf_subsystem_get_listeners", 00:06:57.377 "nvmf_subsystem_get_qpairs", 00:06:57.377 "nvmf_subsystem_get_controllers", 00:06:57.377 "nvmf_get_stats", 00:06:57.377 "nvmf_get_transports", 00:06:57.377 "nvmf_create_transport", 00:06:57.377 "nvmf_get_targets", 00:06:57.377 "nvmf_delete_target", 00:06:57.377 "nvmf_create_target", 00:06:57.377 "nvmf_subsystem_allow_any_host", 00:06:57.377 "nvmf_subsystem_remove_host", 00:06:57.377 "nvmf_subsystem_add_host", 00:06:57.377 "nvmf_ns_remove_host", 00:06:57.377 "nvmf_ns_add_host", 00:06:57.377 "nvmf_subsystem_remove_ns", 00:06:57.377 "nvmf_subsystem_add_ns", 00:06:57.377 "nvmf_subsystem_listener_set_ana_state", 00:06:57.377 "nvmf_discovery_get_referrals", 00:06:57.377 "nvmf_discovery_remove_referral", 00:06:57.377 "nvmf_discovery_add_referral", 00:06:57.377 "nvmf_subsystem_remove_listener", 00:06:57.377 "nvmf_subsystem_add_listener", 00:06:57.377 "nvmf_delete_subsystem", 00:06:57.377 "nvmf_create_subsystem", 00:06:57.377 "nvmf_get_subsystems", 00:06:57.377 "env_dpdk_get_mem_stats", 00:06:57.377 "nbd_get_disks", 00:06:57.377 "nbd_stop_disk", 00:06:57.377 "nbd_start_disk", 00:06:57.377 "ublk_recover_disk", 00:06:57.377 "ublk_get_disks", 00:06:57.377 "ublk_stop_disk", 00:06:57.377 "ublk_start_disk", 00:06:57.377 "ublk_destroy_target", 00:06:57.377 "ublk_create_target", 00:06:57.377 "virtio_blk_create_transport", 00:06:57.377 "virtio_blk_get_transports", 00:06:57.377 "vhost_controller_set_coalescing", 00:06:57.377 "vhost_get_controllers", 00:06:57.377 "vhost_delete_controller", 00:06:57.377 "vhost_create_blk_controller", 00:06:57.377 "vhost_scsi_controller_remove_target", 00:06:57.377 "vhost_scsi_controller_add_target", 00:06:57.377 "vhost_start_scsi_controller", 00:06:57.377 "vhost_create_scsi_controller", 00:06:57.377 "thread_set_cpumask", 00:06:57.377 "framework_get_governor", 00:06:57.377 "framework_get_scheduler", 00:06:57.377 "framework_set_scheduler", 00:06:57.377 "framework_get_reactors", 00:06:57.377 "thread_get_io_channels", 00:06:57.377 "thread_get_pollers", 00:06:57.377 "thread_get_stats", 00:06:57.377 "framework_monitor_context_switch", 00:06:57.377 "spdk_kill_instance", 00:06:57.377 "log_enable_timestamps", 00:06:57.377 
"log_get_flags", 00:06:57.377 "log_clear_flag", 00:06:57.377 "log_set_flag", 00:06:57.377 "log_get_level", 00:06:57.377 "log_set_level", 00:06:57.377 "log_get_print_level", 00:06:57.377 "log_set_print_level", 00:06:57.377 "framework_enable_cpumask_locks", 00:06:57.377 "framework_disable_cpumask_locks", 00:06:57.377 "framework_wait_init", 00:06:57.377 "framework_start_init", 00:06:57.377 "scsi_get_devices", 00:06:57.377 "bdev_get_histogram", 00:06:57.377 "bdev_enable_histogram", 00:06:57.377 "bdev_set_qos_limit", 00:06:57.377 "bdev_set_qd_sampling_period", 00:06:57.377 "bdev_get_bdevs", 00:06:57.377 "bdev_reset_iostat", 00:06:57.377 "bdev_get_iostat", 00:06:57.377 "bdev_examine", 00:06:57.377 "bdev_wait_for_examine", 00:06:57.377 "bdev_set_options", 00:06:57.378 "notify_get_notifications", 00:06:57.378 "notify_get_types", 00:06:57.378 "accel_get_stats", 00:06:57.378 "accel_set_options", 00:06:57.378 "accel_set_driver", 00:06:57.378 "accel_crypto_key_destroy", 00:06:57.378 "accel_crypto_keys_get", 00:06:57.378 "accel_crypto_key_create", 00:06:57.378 "accel_assign_opc", 00:06:57.378 "accel_get_module_info", 00:06:57.378 "accel_get_opc_assignments", 00:06:57.378 "vmd_rescan", 00:06:57.378 "vmd_remove_device", 00:06:57.378 "vmd_enable", 00:06:57.378 "sock_get_default_impl", 00:06:57.378 "sock_set_default_impl", 00:06:57.378 "sock_impl_set_options", 00:06:57.378 "sock_impl_get_options", 00:06:57.378 "iobuf_get_stats", 00:06:57.378 "iobuf_set_options", 00:06:57.378 "framework_get_pci_devices", 00:06:57.378 "framework_get_config", 00:06:57.378 "framework_get_subsystems", 00:06:57.378 "trace_get_info", 00:06:57.378 "trace_get_tpoint_group_mask", 00:06:57.378 "trace_disable_tpoint_group", 00:06:57.378 "trace_enable_tpoint_group", 00:06:57.378 "trace_clear_tpoint_mask", 00:06:57.378 "trace_set_tpoint_mask", 00:06:57.378 "keyring_get_keys", 00:06:57.378 "spdk_get_version", 00:06:57.378 "rpc_get_methods" 00:06:57.378 ] 00:06:57.378 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.378 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.378 11:48:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4052141 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 4052141 ']' 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 4052141 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4052141 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4052141' 00:06:57.378 killing process with pid 4052141 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 4052141 00:06:57.378 11:48:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 4052141 00:06:57.944 00:06:57.944 real 0m1.743s 00:06:57.944 user 0m3.141s 00:06:57.944 sys 0m0.600s 00:06:57.944 11:48:43 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.944 11:48:43 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.944 ************************************ 00:06:57.944 END TEST spdkcli_tcp 00:06:57.944 ************************************ 00:06:57.944 11:48:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.944 11:48:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.944 11:48:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.944 11:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:57.944 ************************************ 00:06:57.944 START TEST dpdk_mem_utility 00:06:57.944 ************************************ 00:06:57.944 11:48:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.944 * Looking for test storage... 00:06:57.944 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:06:57.944 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:57.944 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4052496 00:06:57.944 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4052496 00:06:57.944 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 4052496 ']' 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.944 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.202 [2024-07-25 11:48:44.068606] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
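For reference, the spdkcli_tcp run above exercises the RPC server over TCP by bridging a local port to the target's UNIX-domain socket with socat. The commands below are a minimal standalone sketch of that bridge using the same port and retry/timeout options shown in the trace; the socat_pid bookkeeping and explicit kill are illustration choices, and the harness's err_cleanup trap is not reproduced.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

# Forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Talk to the target over TCP; -r/-t allow a few connection retries with a short
# timeout while the bridge is still coming up.
"$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"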
00:06:58.202 [2024-07-25 11:48:44.068658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052496 ] 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:58.202 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:58.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.202 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:58.202 [2024-07-25 11:48:44.188837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.202 [2024-07-25 11:48:44.271952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.137 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.137 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:59.137 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:59.137 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:59.137 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.137 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.137 { 00:06:59.137 "filename": "/tmp/spdk_mem_dump.txt" 00:06:59.137 } 00:06:59.137 11:48:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.137 11:48:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:59.137 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:59.137 1 heaps totaling size 814.000000 MiB 00:06:59.137 size: 814.000000 MiB heap id: 0 00:06:59.137 end heaps---------- 00:06:59.137 8 mempools totaling size 598.116089 MiB 00:06:59.137 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:59.137 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:59.137 size: 84.521057 MiB name: bdev_io_4052496 00:06:59.137 size: 51.011292 MiB name: evtpool_4052496 00:06:59.137 size: 50.003479 MiB name: msgpool_4052496 00:06:59.137 size: 21.763794 MiB name: PDU_Pool 00:06:59.137 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:59.137 size: 0.026123 MiB name: Session_Pool 00:06:59.137 end mempools------- 00:06:59.137 201 memzones totaling size 4.176453 MiB 00:06:59.137 size: 1.000366 MiB name: RG_ring_0_4052496 00:06:59.137 size: 1.000366 MiB name: RG_ring_1_4052496 00:06:59.137 size: 1.000366 MiB name: RG_ring_4_4052496 00:06:59.137 size: 1.000366 MiB name: RG_ring_5_4052496 00:06:59.137 size: 0.125366 MiB name: RG_ring_2_4052496 00:06:59.137 size: 0.015991 MiB name: RG_ring_3_4052496 00:06:59.137 size: 0.001160 MiB name: 
QAT_SYM_CAPA_GEN_1 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:59.137 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:59.137 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:59.137 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:59.137 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:59.137 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:59.137 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:59.137 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_3 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_9 
00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_14 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:59.138 size: 0.000122 MiB name: 
rte_cryptodev_data_48 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_25 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_40 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_42 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:59.138 
size: 0.000122 MiB name: rte_compressdev_data_43 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_44 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_45 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_46 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:59.138 size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:59.138 size: 0.000122 MiB name: rte_compressdev_data_47 00:06:59.138 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:59.138 end memzones------- 00:06:59.138 11:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:59.138 heap id: 0 total size: 814.000000 MiB number of busy elements: 638 number of free elements: 14 00:06:59.138 list of free elements. size: 11.781189 MiB 00:06:59.138 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:59.138 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:59.138 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:59.138 element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:59.138 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:59.138 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:59.138 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:59.138 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:59.138 element at address: 0x20001aa00000 with size: 0.564758 MiB 00:06:59.138 element at address: 0x200003a00000 with size: 0.494324 MiB 00:06:59.138 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:59.138 element at address: 0x200000800000 with size: 0.486511 MiB 00:06:59.138 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:59.138 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:59.138 list of standard malloc elements. 
size: 199.898804 MiB 00:06:59.138 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:59.138 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:59.138 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:59.138 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:59.138 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:59.138 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:59.138 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:59.138 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:59.138 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:06:59.138 element at address: 0x20000032f740 with size: 0.004395 MiB 00:06:59.138 element at address: 0x200000333200 with size: 0.004395 MiB 00:06:59.138 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:06:59.138 element at address: 0x20000033a780 with size: 0.004395 MiB 00:06:59.138 element at address: 0x20000033e240 with size: 0.004395 MiB 00:06:59.138 element at address: 0x200000341d00 with size: 0.004395 MiB 00:06:59.138 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000349280 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000350800 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000357d80 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000035b840 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000035f300 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003718c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:59.139 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003bea80 with size: 0.004395 MiB 
00:06:59.139 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003d4b00 with size: 0.004395 MiB 00:06:59.139 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:59.139 element at address: 0x200000329b80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000032d640 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000331100 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000332180 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000335c40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000338680 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000339700 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000033c140 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000340c80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000344740 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000347180 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000348200 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000034e700 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000034f780 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000353240 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000355c80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000356d00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000359740 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000035d200 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000035e280 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:59.139 element at 
address: 0x200000376d40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000038cdc0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:59.139 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c0440 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003cffc0 
with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:59.139 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:59.139 element at address: 0x200000200000 with size: 0.000305 MiB 00:06:59.139 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:59.139 element at address: 0x200000200140 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200200 with size: 0.000183 MiB 00:06:59.139 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200380 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200440 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200500 with size: 0.000183 MiB 00:06:59.139 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200680 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200740 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200800 with size: 0.000183 MiB 00:06:59.139 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200980 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200a40 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200b00 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200c80 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200d40 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200e00 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000200ec0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x2000002010c0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000205380 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000225640 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000225700 with size: 0.000183 MiB 00:06:59.139 element at address: 0x2000002257c0 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000225880 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000225940 with size: 0.000183 MiB 00:06:59.139 element at address: 0x200000225a00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225ac0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225b80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225c40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225d00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225dc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225e80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000225f40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226000 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000002260c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226180 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226240 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226300 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226500 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000002265c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226680 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226740 with size: 0.000183 MiB 
00:06:59.140 element at address: 0x200000226800 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000002268c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226980 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226a40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226b00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226bc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226c80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226d40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226e00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226ec0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000226f80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000227040 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000227100 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000329300 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003293c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000329580 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000329640 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000329800 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000032ce80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000032d040 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000032d100 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000032d2c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000330940 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000330b00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000330bc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000330d80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000334400 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003345c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000334680 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000334840 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000337ec0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000338080 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000338140 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000338300 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033b980 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033bb40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033bc00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033f440 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033f600 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033f6c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000033f880 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000342f00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003430c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000343180 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000343340 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003469c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000346b80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000346c40 with size: 0.000183 MiB 00:06:59.140 element at 
address: 0x200000346e00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034a480 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034a640 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034a700 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034a8c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034df40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034e100 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034e1c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000034e380 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000351a00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000351bc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000351c80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000351e40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003554c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000355680 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000355740 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000355900 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000358f80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000359140 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000359200 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003593c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000035ca40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000035cc00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000035ccc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000035ce80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000360500 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003606c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000360780 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000360940 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000364180 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000364240 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000364400 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000367a80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000367c40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000367d00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036b540 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036b700 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036b980 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036f000 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036f280 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000036f440 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000372c80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000372d40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000372f00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000376580 
with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000376740 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000376800 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037a040 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037a200 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037a480 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037db00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037dcc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000037df40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000381780 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000381840 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000381a00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000385080 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000385240 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000385300 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000388b40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000388d00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000388f80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000038c600 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000038c880 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000390280 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000390340 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000390500 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000393b80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000393d40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000393e00 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000397640 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000397800 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x200000397a80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039b100 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039b380 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039b540 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:06:59.140 element at address: 0x20000039f000 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:06:59.140 element at address: 0x2000003a2840 with size: 0.000183 MiB 
00:06:59.140 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a9dc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b1180 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c3740 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003cacc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:06:59.141 element at 
address: 0x2000003cebc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7e8c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7e980 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ea40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7eb00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ebc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ec80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ed40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ee00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7eec0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7ef80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f040 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f100 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f1c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f280 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f340 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f400 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f4c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f580 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f640 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f700 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f7c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f880 with size: 0.000183 MiB 00:06:59.141 element at address: 0x200003a7f940 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90940 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:59.141 element at address: 0x20001aa925c0 with size: 0.000183 MiB 
00:06:59.141 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:59.142 element at 
address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:59.142 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6dc80 
with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:59.142 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:59.143 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:59.143 list of memzone associated elements. 
size: 602.320007 MiB 00:06:59.143 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:59.143 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:59.143 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:59.143 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:59.143 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:59.143 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4052496_0 00:06:59.143 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:59.143 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4052496_0 00:06:59.143 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:59.143 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4052496_0 00:06:59.143 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:59.143 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:59.143 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:59.143 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:59.143 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:59.143 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4052496 00:06:59.143 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:59.143 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4052496 00:06:59.143 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:06:59.143 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4052496 00:06:59.143 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:59.143 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:59.143 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:59.143 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:59.143 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:59.143 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:59.143 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:59.143 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:59.143 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:59.143 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4052496 00:06:59.143 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:59.143 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4052496 00:06:59.143 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:59.143 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4052496 00:06:59.143 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:59.143 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4052496 00:06:59.143 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:59.143 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4052496 00:06:59.143 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:59.143 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:59.143 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:59.143 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:59.143 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:59.143 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:59.143 element at address: 0x200000205440 with size: 0.125488 MiB 00:06:59.143 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4052496 00:06:59.143 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:59.143 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:59.143 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:59.143 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:59.143 element at address: 0x200000201180 with size: 0.016113 MiB 00:06:59.143 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4052496 00:06:59.143 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:59.143 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:59.143 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:59.143 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:59.143 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:59.143 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:59.143 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:59.143 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:59.143 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:59.143 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:59.143 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:59.143 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:59.143 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:59.143 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:59.143 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:59.143 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:59.143 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:59.143 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:59.143 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:59.143 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:59.143 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:59.143 element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 
0000:1c:01.1_qat 00:06:59.143 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:59.143 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:59.143 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:59.143 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:59.143 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:59.143 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:59.143 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:59.143 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:59.143 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:59.143 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:59.143 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:59.143 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:59.143 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:59.143 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:59.143 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:59.143 element at address: 0x20000035d040 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:59.143 element at address: 0x200000359580 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:59.143 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:59.143 element at address: 0x200000352000 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:59.143 element at address: 0x20000034e540 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:59.143 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:59.143 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:59.143 element at address: 0x200000343500 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:59.143 element at address: 
0x20000033fa40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:59.143 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:59.143 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:59.143 element at address: 0x200000334a00 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:59.143 element at address: 0x200000330f40 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:59.143 element at address: 0x20000032d480 with size: 0.000427 MiB 00:06:59.143 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:59.143 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:06:59.144 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:59.144 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:59.144 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:59.144 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:06:59.144 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4052496 00:06:59.144 element at address: 0x200000200f80 with size: 0.000305 MiB 00:06:59.144 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4052496 00:06:59.144 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:59.144 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:59.144 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:59.144 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:59.144 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:59.144 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:59.144 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:59.144 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:59.144 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:59.144 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:59.144 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:59.144 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:59.144 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:59.144 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:59.144 element at 
address: 0x2000003c7700 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:59.144 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:59.144 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:59.144 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:59.144 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:59.144 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:59.144 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:59.144 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:59.144 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:59.144 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:59.144 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:59.144 element at address: 0x2000003bc280 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:59.144 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:59.144 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:59.144 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:59.144 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:59.144 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:59.144 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:59.144 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:59.144 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:59.144 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:59.144 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:59.144 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_23 00:06:59.144 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:59.144 element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:59.144 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:59.144 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:59.144 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:59.144 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:59.144 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:59.144 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:59.144 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:59.144 element at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:59.144 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:59.144 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:59.144 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:59.144 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:59.144 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:59.144 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:59.144 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:59.144 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:59.144 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:59.144 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:59.144 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:59.144 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:06:59.144 element at address: 
0x2000003905c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:59.144 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:59.144 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:59.144 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:59.144 element at address: 0x20000038c940 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:59.144 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:59.144 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:59.144 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:59.144 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:59.144 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:59.144 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:59.144 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:59.144 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:59.144 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:59.144 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:59.144 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:59.144 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:59.145 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:59.145 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:59.145 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:59.145 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:59.145 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:59.145 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:59.145 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_53 00:06:59.145 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:59.145 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:59.145 element at address: 0x200000372e00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:59.145 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:59.145 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:59.145 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:59.145 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:59.145 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:59.145 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:59.145 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:59.145 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:59.145 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:59.145 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:59.145 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:59.145 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:59.145 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:59.145 element at address: 0x200000360a00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:59.145 element at address: 0x200000360840 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:59.145 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:06:59.145 element at address: 0x20000035cf40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:59.145 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:59.145 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:06:59.145 element at address: 
0x200000359480 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:59.145 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:59.145 element at address: 0x200000359040 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:06:59.145 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:59.145 element at address: 0x200000355800 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:59.145 element at address: 0x200000355580 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:06:59.145 element at address: 0x200000351f00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:59.145 element at address: 0x200000351d40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:59.145 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:06:59.145 element at address: 0x20000034e440 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:59.145 element at address: 0x20000034e280 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:59.145 element at address: 0x20000034e000 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:06:59.145 element at address: 0x20000034a980 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:59.145 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:59.145 element at address: 0x20000034a540 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:06:59.145 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:59.145 element at address: 0x200000346d00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:59.145 element at address: 0x200000346a80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:06:59.145 element at address: 0x200000343400 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:59.145 element at address: 0x200000343240 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:59.145 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:06:59.145 element at address: 0x20000033f940 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:59.145 element at address: 0x20000033f780 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_83 00:06:59.145 element at address: 0x20000033f500 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:06:59.145 element at address: 0x20000033be80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:59.145 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:59.145 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:06:59.145 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:59.145 element at address: 0x200000338200 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:59.145 element at address: 0x200000337f80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:06:59.145 element at address: 0x200000334900 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:59.145 element at address: 0x200000334740 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:59.145 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:06:59.145 element at address: 0x200000330e40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:59.145 element at address: 0x200000330c80 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:59.145 element at address: 0x200000330a00 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:06:59.145 element at address: 0x20000032d380 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:59.145 element at address: 0x20000032d1c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:59.145 element at address: 0x20000032cf40 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:06:59.145 element at address: 0x2000003298c0 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:59.145 element at address: 0x200000329700 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:59.145 element at address: 0x200000329480 with size: 0.000244 MiB 00:06:59.145 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:06:59.145 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:59.145 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:59.145 11:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:59.145 11:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4052496 00:06:59.145 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 4052496 ']' 00:06:59.145 11:48:45 
dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 4052496 00:06:59.145 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:59.145 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.146 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4052496 00:06:59.404 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.404 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.404 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4052496' 00:06:59.404 killing process with pid 4052496 00:06:59.404 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 4052496 00:06:59.404 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 4052496 00:06:59.661 00:06:59.661 real 0m1.721s 00:06:59.661 user 0m1.917s 00:06:59.661 sys 0m0.539s 00:06:59.661 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.661 11:48:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 END TEST dpdk_mem_utility 00:06:59.661 ************************************ 00:06:59.661 11:48:45 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:59.661 11:48:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.661 11:48:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.661 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 START TEST event 00:06:59.661 ************************************ 00:06:59.661 11:48:45 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:59.919 * Looking for test storage... 00:06:59.919 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:59.919 11:48:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:59.919 11:48:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.919 11:48:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.919 11:48:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:59.919 11:48:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.919 11:48:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.919 ************************************ 00:06:59.919 START TEST event_perf 00:06:59.919 ************************************ 00:06:59.919 11:48:45 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.919 Running I/O for 1 seconds...[2024-07-25 11:48:45.875428] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
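The kill sequence traced just above (a '[ -z pid ]' guard, kill -0 to probe the pid, uname to confirm Linux, ps --no-headers -o comm= to read the process name, a check that the name is not sudo, then kill followed by wait) is the harness's killprocess helper tearing down the dpdk_mem_utility target with pid 4052496. A minimal standalone sketch of that flow follows; it is reconstructed from the logged commands only, not copied from common/autotest_common.sh, and the behaviour of the sudo guard here is an assumption.

  # Sketch of a killprocess-style helper, reconstructed from the xtrace output above.
  # Assumption: SPDK's real helper may handle errors and the sudo branch differently.
  killprocess_sketch() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # same '[ -z ... ]' guard as in the trace
      kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if the pid is already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1   # refuse to kill a bare sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null
  }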
00:06:59.919 [2024-07-25 11:48:45.875487] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052972 ] 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.919 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:59.919 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.920 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:59.920 [2024-07-25 11:48:46.007710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.178 [2024-07-25 11:48:46.095449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.178 [2024-07-25 11:48:46.095544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.178 [2024-07-25 11:48:46.095629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.178 [2024-07-25 11:48:46.095632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.115 Running I/O for 1 seconds... 00:07:01.115 lcore 0: 184974 00:07:01.115 lcore 1: 184971 00:07:01.115 lcore 2: 184972 00:07:01.115 lcore 3: 184974 00:07:01.115 done. 00:07:01.115 00:07:01.115 real 0m1.327s 00:07:01.115 user 0m4.195s 00:07:01.115 sys 0m0.127s 00:07:01.115 11:48:47 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.115 11:48:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.115 ************************************ 00:07:01.115 END TEST event_perf 00:07:01.115 ************************************ 00:07:01.115 11:48:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:01.115 11:48:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:01.115 11:48:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.115 11:48:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.374 ************************************ 00:07:01.374 START TEST event_reactor 00:07:01.374 ************************************ 00:07:01.374 11:48:47 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:01.374 [2024-07-25 11:48:47.281794] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
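The event_perf run that just completed pinned four reactors (-m 0xF) and dispatched events for one second (-t 1), printing one counter per lcore. Summing those counters gives the aggregate throughput for the window; the one-liner below is illustrative post-processing, not part of the test, and assumes the console output was captured to event_perf.log in the 'lcore N: COUNT' form shown above.

  # Illustrative only: sum the per-lcore counters printed by event_perf.
  # $NF is the count, which also works if a timestamp prefix precedes each line.
  awk '/lcore [0-9]+:/ { total += $NF } END { printf "events in the 1s window: %d\n", total }' event_perf.log

For the counters above that works out to 739,891 events across the four lcores, or roughly 740k events per second.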
00:07:01.374 [2024-07-25 11:48:47.281853] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053203 ] 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:01.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.374 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:01.375 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:01.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.375 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:01.375 [2024-07-25 11:48:47.413543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.633 [2024-07-25 11:48:47.496605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.570 test_start 00:07:02.570 oneshot 00:07:02.570 tick 100 00:07:02.570 tick 100 00:07:02.570 tick 250 00:07:02.570 tick 100 00:07:02.570 tick 100 00:07:02.570 tick 250 00:07:02.570 tick 100 00:07:02.570 tick 500 00:07:02.570 tick 100 00:07:02.570 tick 100 00:07:02.570 tick 250 00:07:02.570 tick 100 00:07:02.570 tick 100 00:07:02.570 test_end 00:07:02.570 00:07:02.570 real 0m1.317s 00:07:02.570 user 0m1.176s 00:07:02.570 sys 0m0.135s 00:07:02.570 11:48:48 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.570 11:48:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 ************************************ 00:07:02.570 END TEST event_reactor 00:07:02.570 ************************************ 00:07:02.570 11:48:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.570 11:48:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.570 11:48:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.570 11:48:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 ************************************ 00:07:02.570 START TEST event_reactor_perf 00:07:02.570 ************************************ 00:07:02.570 11:48:48 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.570 [2024-07-25 11:48:48.678305] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
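The event_reactor trace above runs on a single reactor (-c 0x1) and, between test_start and test_end, logs one line per timer expiration: a single oneshot plus recurring timers with periods 100, 250 and 500 (the units are not printed in the log). Over the roughly one-second run the 100-period timer fires most often, as expected. One way to read such a trace is to tally the tick lines:

  # Count how often each timer period fired in a captured reactor trace.
  # Assumes the console output was saved to reactor.log in the format shown above.
  grep -oE 'tick [0-9]+' reactor.log | sort | uniq -c
  # For the run above this yields 9 x 'tick 100', 3 x 'tick 250' and 1 x 'tick 500'.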
00:07:02.570 [2024-07-25 11:48:48.678365] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053402 ] 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:02.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.830 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:02.830 [2024-07-25 11:48:48.809557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.830 [2024-07-25 11:48:48.892535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.208 test_start 00:07:04.208 test_end 00:07:04.208 Performance: 354141 events per second 00:07:04.208 00:07:04.208 real 0m1.315s 00:07:04.208 user 0m1.181s 00:07:04.208 sys 0m0.127s 00:07:04.208 11:48:49 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.208 11:48:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.208 ************************************ 00:07:04.208 END TEST event_reactor_perf 00:07:04.208 ************************************ 00:07:04.208 11:48:50 event -- event/event.sh@49 -- # uname -s 00:07:04.208 11:48:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.208 11:48:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.208 11:48:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.208 11:48:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.208 11:48:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.208 ************************************ 00:07:04.208 START TEST event_scheduler 00:07:04.208 ************************************ 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.208 * Looking for test storage... 
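Every sub-test in this section, dpdk_mem_utility, event_perf, event_reactor and now event_reactor_perf (which just reported 354141 events per second on one core), is driven through the same run_test wrapper: a START TEST banner, the timed command, the real/user/sys figures, and an END TEST banner. The sketch below only mirrors that visible shape and is not the actual common/autotest_common.sh run_test, which also validates its arguments and manages xtrace.

  # Rough approximation of a run_test-style wrapper, modelled on the banners in this log.
  run_test_sketch() {
      local test_name=$1; shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return $rc
  }
  # Usage, mirroring an invocation above:
  #   run_test_sketch event_reactor_perf ./reactor_perf -t 1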
00:07:04.208 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:07:04.208 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.208 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4053688 00:07:04.208 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.208 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.208 11:48:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4053688 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 4053688 ']' 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.208 11:48:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.208 [2024-07-25 11:48:50.215687] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:04.208 [2024-07-25 11:48:50.215752] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053688 ] 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.208 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:04.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:04.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.209 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:04.209 [2024-07-25 11:48:50.320289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.468 [2024-07-25 11:48:50.394853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.468 [2024-07-25 11:48:50.394938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.468 [2024-07-25 11:48:50.395023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.468 [2024-07-25 11:48:50.395026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.035 11:48:51 
event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:05.035 11:48:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.035 [2024-07-25 11:48:51.133759] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:05.035 [2024-07-25 11:48:51.133779] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.035 [2024-07-25 11:48:51.133790] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.035 [2024-07-25 11:48:51.133797] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.035 [2024-07-25 11:48:51.133804] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.035 11:48:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.035 11:48:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.294 [2024-07-25 11:48:51.221031] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:05.294 11:48:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.294 11:48:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.294 11:48:51 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.294 11:48:51 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.294 11:48:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.294 ************************************ 00:07:05.294 START TEST scheduler_create_thread 00:07:05.294 ************************************ 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 2 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 3 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 4 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 5 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 6 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 7 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 8 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 9 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 10 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.295 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.863 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.863 11:48:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:05.863 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.863 11:48:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.301 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.301 11:48:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:07.301 11:48:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:07.301 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.301 11:48:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.674 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.674 00:07:08.674 real 0m3.096s 00:07:08.674 user 0m0.025s 00:07:08.674 sys 0m0.006s 00:07:08.674 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.674 11:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.674 ************************************ 00:07:08.674 END TEST scheduler_create_thread 00:07:08.674 ************************************ 00:07:08.674 11:48:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:08.674 11:48:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4053688 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 4053688 ']' 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 4053688 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:08.674 11:48:54 
event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4053688 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4053688' 00:07:08.674 killing process with pid 4053688 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 4053688 00:07:08.674 11:48:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 4053688 00:07:08.674 [2024-07-25 11:48:54.736182] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:08.933 00:07:08.933 real 0m4.893s 00:07:08.933 user 0m9.601s 00:07:08.933 sys 0m0.473s 00:07:08.933 11:48:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.933 11:48:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:08.933 ************************************ 00:07:08.933 END TEST event_scheduler 00:07:08.933 ************************************ 00:07:08.933 11:48:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:08.933 11:48:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:08.933 11:48:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.933 11:48:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.933 11:48:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.933 ************************************ 00:07:08.933 START TEST app_repeat 00:07:08.933 ************************************ 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4054656 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4054656' 00:07:08.933 Process app_repeat pid: 4054656 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:08.933 spdk_app_start Round 0 00:07:08.933 11:48:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4054656 /var/tmp/spdk-nbd.sock 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4054656 ']' 00:07:08.933 11:48:55 event.app_repeat 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.933 11:48:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.192 [2024-07-25 11:48:55.080538] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:09.192 [2024-07-25 11:48:55.080604] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054656 ] 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested 
device 0000:3f:01.1 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.192 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:09.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:09.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.193 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:09.193 [2024-07-25 11:48:55.216302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.193 [2024-07-25 11:48:55.300007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.193 [2024-07-25 11:48:55.300012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.129 11:48:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.129 11:48:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:10.129 11:48:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.129 Malloc0 00:07:10.129 11:48:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.389 Malloc1 00:07:10.389 11:48:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.389 
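The scheduler_create_thread trace earlier in this output drives the scheduler test application entirely through rpc.py plugin calls. The condensed sketch below restates that call sequence; it assumes the test app is already running on the default RPC socket and that its scheduler_plugin module is importable by rpc.py (the PYTHONPATH setup is not shown in this trace). Reading the flags from the trace, -n appears to name the thread, -m to pin it to a core mask, and -a to set how busy it should pretend to be (0-100).

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Pinned threads: one fully active on core 2 (mask 0x4), one idle on core 0 (mask 0x1).
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned thread with a fractional load.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # The create call prints the new thread id, which later calls operate on.
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # Create a thread and immediately delete it again.
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"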
11:48:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.389 /dev/nbd0 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.389 1+0 records in 00:07:10.389 1+0 records out 00:07:10.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163294 s, 25.1 MB/s 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.389 11:48:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.389 11:48:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:10.649 /dev/nbd1 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:10.649 11:48:56 
event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.649 1+0 records in 00:07:10.649 1+0 records out 00:07:10.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259619 s, 15.8 MB/s 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.649 11:48:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.649 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.907 { 00:07:10.907 "nbd_device": "/dev/nbd0", 00:07:10.907 "bdev_name": "Malloc0" 00:07:10.907 }, 00:07:10.907 { 00:07:10.907 "nbd_device": "/dev/nbd1", 00:07:10.907 "bdev_name": "Malloc1" 00:07:10.907 } 00:07:10.907 ]' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.907 { 00:07:10.907 "nbd_device": "/dev/nbd0", 00:07:10.907 "bdev_name": "Malloc0" 00:07:10.907 }, 00:07:10.907 { 00:07:10.907 "nbd_device": "/dev/nbd1", 00:07:10.907 "bdev_name": "Malloc1" 00:07:10.907 } 00:07:10.907 ]' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.907 /dev/nbd1' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.907 /dev/nbd1' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.907 11:48:56 
event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.907 11:48:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.907 256+0 records in 00:07:10.907 256+0 records out 00:07:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108577 s, 96.6 MB/s 00:07:10.907 11:48:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.907 11:48:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.165 256+0 records in 00:07:11.165 256+0 records out 00:07:11.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167571 s, 62.6 MB/s 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.165 256+0 records in 00:07:11.165 256+0 records out 00:07:11.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183156 s, 57.3 MB/s 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@51 -- 
# local i 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.165 11:48:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.424 11:48:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.684 11:48:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.943 11:48:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.943 11:48:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.201 11:48:58 event.app_repeat -- event/event.sh@35 -- # 
sleep 3 00:07:12.201 [2024-07-25 11:48:58.295165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.460 [2024-07-25 11:48:58.374097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.460 [2024-07-25 11:48:58.374102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.460 [2024-07-25 11:48:58.418667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.460 [2024-07-25 11:48:58.418714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:14.993 11:49:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.993 11:49:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:14.993 spdk_app_start Round 1 00:07:14.993 11:49:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4054656 /var/tmp/spdk-nbd.sock 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4054656 ']' 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.993 11:49:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.252 11:49:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.252 11:49:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:15.252 11:49:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.510 Malloc0 00:07:15.510 11:49:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.769 Malloc1 00:07:15.769 11:49:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
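Each app_repeat round repeats the same setup just traced: two 64 MB malloc bdevs with a 4096-byte block size are created over the app's RPC socket, then exported to the kernel as NBD block devices. A minimal sketch of that per-round setup, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the harness runs modprobe nbd beforehand):

    rpc='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    # Each call prints the name of the bdev it created (Malloc0, then Malloc1).
    $rpc bdev_malloc_create 64 4096
    $rpc bdev_malloc_create 64 4096
    # Attach each bdev to a kernel NBD node so ordinary block I/O tools can reach it.
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1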
00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.769 /dev/nbd0 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.769 1+0 records in 00:07:15.769 1+0 records out 00:07:15.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214963 s, 19.1 MB/s 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.769 11:49:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.769 11:49:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.029 /dev/nbd1 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.029 
11:49:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.029 1+0 records in 00:07:16.029 1+0 records out 00:07:16.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237594 s, 17.2 MB/s 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.029 11:49:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.029 11:49:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.288 11:49:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.288 { 00:07:16.288 "nbd_device": "/dev/nbd0", 00:07:16.288 "bdev_name": "Malloc0" 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "nbd_device": "/dev/nbd1", 00:07:16.288 "bdev_name": "Malloc1" 00:07:16.288 } 00:07:16.288 ]' 00:07:16.288 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.288 { 00:07:16.288 "nbd_device": "/dev/nbd0", 00:07:16.288 "bdev_name": "Malloc0" 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "nbd_device": "/dev/nbd1", 00:07:16.288 "bdev_name": "Malloc1" 00:07:16.288 } 00:07:16.288 ]' 00:07:16.288 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.548 /dev/nbd1' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.548 /dev/nbd1' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.548 256+0 records in 00:07:16.548 256+0 records out 00:07:16.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106898 s, 98.1 MB/s 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.548 256+0 records in 00:07:16.548 256+0 records out 00:07:16.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165986 s, 63.2 MB/s 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.548 256+0 records in 00:07:16.548 256+0 records out 00:07:16.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182196 s, 57.6 MB/s 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.548 11:49:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.808 11:49:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.808 11:49:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.066 11:49:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.067 11:49:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.067 11:49:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.067 11:49:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.067 11:49:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.067 11:49:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.067 11:49:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.067 11:49:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.067 11:49:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.067 11:49:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.067 11:49:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.326 11:49:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.326 11:49:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.585 11:49:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.844 [2024-07-25 11:49:03.710115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.844 [2024-07-25 11:49:03.788124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.844 [2024-07-25 11:49:03.788129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.844 [2024-07-25 11:49:03.833583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:07:17.844 [2024-07-25 11:49:03.833628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.375 11:49:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.375 11:49:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:20.375 spdk_app_start Round 2 00:07:20.375 11:49:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4054656 /var/tmp/spdk-nbd.sock 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4054656 ']' 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.375 11:49:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.634 11:49:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.634 11:49:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:20.634 11:49:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.893 Malloc0 00:07:20.893 11:49:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.893 Malloc1 00:07:20.893 11:49:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.893 11:49:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.153 /dev/nbd0 00:07:21.153 11:49:07 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.153 11:49:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.153 1+0 records in 00:07:21.153 1+0 records out 00:07:21.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212982 s, 19.2 MB/s 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.153 11:49:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.153 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.153 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.153 11:49:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:21.413 /dev/nbd1 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.413 1+0 records in 00:07:21.413 1+0 records out 00:07:21.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283505 s, 14.4 MB/s 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 
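The waitfornbd checks traced through this section gate the data phase on the NBD node actually being usable: the helper polls /proc/partitions for the device name, then retries a single O_DIRECT block read until it returns data. The sketch below reconstructs that logic from the trace; the attempt limit of 20 is taken from the trace, while the retry delay and the temporary file path are assumptions, since the trace only shows the first, successful pass.

    wait_for_nbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest size i
        # Wait for the kernel to register the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; not visible in the trace
        done
        # Then make sure a real read succeeds (bypassing the page cache).
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null
            size=$(stat -c %s "$tmp" 2>/dev/null || echo 0)
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_nbd nbd0 && wait_for_nbd nbd1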
00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.413 11:49:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.413 11:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.716 { 00:07:21.716 "nbd_device": "/dev/nbd0", 00:07:21.716 "bdev_name": "Malloc0" 00:07:21.716 }, 00:07:21.716 { 00:07:21.716 "nbd_device": "/dev/nbd1", 00:07:21.716 "bdev_name": "Malloc1" 00:07:21.716 } 00:07:21.716 ]' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.716 { 00:07:21.716 "nbd_device": "/dev/nbd0", 00:07:21.716 "bdev_name": "Malloc0" 00:07:21.716 }, 00:07:21.716 { 00:07:21.716 "nbd_device": "/dev/nbd1", 00:07:21.716 "bdev_name": "Malloc1" 00:07:21.716 } 00:07:21.716 ]' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.716 /dev/nbd1' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.716 /dev/nbd1' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.716 11:49:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.717 11:49:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:21.717 256+0 records in 00:07:21.717 256+0 records out 00:07:21.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748316 s, 140 MB/s 00:07:21.717 11:49:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.717 11:49:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
oflag=direct 00:07:21.717 256+0 records in 00:07:21.717 256+0 records out 00:07:21.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016731 s, 62.7 MB/s 00:07:21.717 11:49:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.717 11:49:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.998 256+0 records in 00:07:21.998 256+0 records out 00:07:21.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285297 s, 36.8 MB/s 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.998 11:49:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.998 11:49:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.998 11:49:08 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.256 11:49:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.257 11:49:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.257 11:49:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.257 11:49:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.257 11:49:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.515 11:49:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.515 11:49:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:22.775 11:49:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.033 [2024-07-25 11:49:09.098109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.292 [2024-07-25 11:49:09.176325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.292 [2024-07-25 11:49:09.176330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.292 [2024-07-25 11:49:09.220096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.292 [2024-07-25 11:49:09.220150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
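The write/verify pass that closes each round, traced above for rounds 0 through 2, is simple to restate: 1 MiB of random data (256 blocks of 4096 bytes) is written to a scratch file, copied onto each NBD device with O_DIRECT, and compared back with cmp; the exports are then torn down and the RPC server is asked to confirm that no NBD devices remain. A sketch, with the scratch-file path shortened from the path the test actually uses:

    rpc='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct  # write it straight to the device
        cmp -b -n 1M "$pattern" "$nbd"                             # read back and byte-compare the first 1 MiB
    done
    rm "$pattern"
    # Detach both exports, then verify the app no longer reports any NBD devices.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ]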
00:07:25.825 11:49:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4054656 /var/tmp/spdk-nbd.sock 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4054656 ']' 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.825 11:49:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:26.084 11:49:12 event.app_repeat -- event/event.sh@39 -- # killprocess 4054656 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 4054656 ']' 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 4054656 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4054656 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4054656' 00:07:26.084 killing process with pid 4054656 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 4054656 00:07:26.084 11:49:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 4054656 00:07:26.343 spdk_app_start is called in Round 0. 00:07:26.343 Shutdown signal received, stop current app iteration 00:07:26.343 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:07:26.343 spdk_app_start is called in Round 1. 00:07:26.343 Shutdown signal received, stop current app iteration 00:07:26.343 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:07:26.343 spdk_app_start is called in Round 2. 00:07:26.343 Shutdown signal received, stop current app iteration 00:07:26.343 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:07:26.343 spdk_app_start is called in Round 3. 
00:07:26.343 Shutdown signal received, stop current app iteration 00:07:26.343 11:49:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:26.343 11:49:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:26.343 00:07:26.343 real 0m17.247s 00:07:26.343 user 0m36.648s 00:07:26.343 sys 0m3.579s 00:07:26.343 11:49:12 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.343 11:49:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.343 ************************************ 00:07:26.343 END TEST app_repeat 00:07:26.343 ************************************ 00:07:26.343 11:49:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:26.343 00:07:26.343 real 0m26.622s 00:07:26.343 user 0m52.981s 00:07:26.343 sys 0m4.827s 00:07:26.343 11:49:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.343 11:49:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.343 ************************************ 00:07:26.343 END TEST event 00:07:26.343 ************************************ 00:07:26.343 11:49:12 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:07:26.343 11:49:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.343 11:49:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.343 11:49:12 -- common/autotest_common.sh@10 -- # set +x 00:07:26.343 ************************************ 00:07:26.343 START TEST thread 00:07:26.343 ************************************ 00:07:26.343 11:49:12 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:07:26.603 * Looking for test storage... 00:07:26.603 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:07:26.603 11:49:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.603 11:49:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:26.603 11:49:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.603 11:49:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.603 ************************************ 00:07:26.603 START TEST thread_poller_perf 00:07:26.603 ************************************ 00:07:26.603 11:49:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.603 [2024-07-25 11:49:12.581787] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:26.603 [2024-07-25 11:49:12.581858] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057922 ] 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:26.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.603 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:26.603 [2024-07-25 11:49:12.715404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.861 [2024-07-25 11:49:12.798726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.861 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:27.797 ====================================== 00:07:27.797 busy:2512136794 (cyc) 00:07:27.797 total_run_count: 290000 00:07:27.797 tsc_hz: 2500000000 (cyc) 00:07:27.797 ====================================== 00:07:27.797 poller_cost: 8662 (cyc), 3464 (nsec) 00:07:27.797 00:07:27.797 real 0m1.329s 00:07:27.797 user 0m1.194s 00:07:27.797 sys 0m0.128s 00:07:27.797 11:49:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.797 11:49:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.797 ************************************ 00:07:27.797 END TEST thread_poller_perf 00:07:27.797 ************************************ 00:07:28.056 11:49:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.056 11:49:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:28.056 11:49:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.056 11:49:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.056 ************************************ 00:07:28.056 START TEST thread_poller_perf 00:07:28.056 ************************************ 00:07:28.056 11:49:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.056 [2024-07-25 11:49:13.985566] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
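A note on reading the poller_perf summary above (this derivation is inferred from the reported counters; the tool itself only prints the totals): the per-poller cost is the busy cycle count divided by the number of poller invocations, converted to nanoseconds via the reported TSC frequency.

    poller_cost (cyc)  = busy / total_run_count = 2512136794 / 290000 ≈ 8662
    poller_cost (nsec) = 8662 cyc at 2.5 GHz    = 8662 / 2.5          ≈ 3464

The same arithmetic applies to the zero-period run that follows (2502390322 / 3821000 ≈ 654 cyc, i.e. about 261 nsec at 2.5 GHz).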
00:07:28.056 [2024-07-25 11:49:13.985628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058190 ] 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:28.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.056 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:28.056 [2024-07-25 11:49:14.115450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.315 [2024-07-25 11:49:14.198250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.315 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:29.251 ====================================== 00:07:29.251 busy:2502390322 (cyc) 00:07:29.251 total_run_count: 3821000 00:07:29.251 tsc_hz: 2500000000 (cyc) 00:07:29.251 ====================================== 00:07:29.251 poller_cost: 654 (cyc), 261 (nsec) 00:07:29.251 00:07:29.251 real 0m1.315s 00:07:29.251 user 0m1.181s 00:07:29.251 sys 0m0.129s 00:07:29.251 11:49:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.251 11:49:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.251 ************************************ 00:07:29.251 END TEST thread_poller_perf 00:07:29.251 ************************************ 00:07:29.251 11:49:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:29.251 00:07:29.251 real 0m2.905s 00:07:29.251 user 0m2.465s 00:07:29.251 sys 0m0.446s 00:07:29.251 11:49:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.251 11:49:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.251 ************************************ 00:07:29.251 END TEST thread 00:07:29.251 ************************************ 00:07:29.251 11:49:15 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:07:29.251 11:49:15 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:29.251 11:49:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.251 11:49:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.251 11:49:15 -- common/autotest_common.sh@10 -- # set +x 00:07:29.510 ************************************ 00:07:29.510 START TEST accel 00:07:29.510 ************************************ 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:29.510 * Looking for test storage... 
00:07:29.510 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:29.510 11:49:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:29.510 11:49:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:29.510 11:49:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.510 11:49:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4058518 00:07:29.510 11:49:15 accel -- accel/accel.sh@63 -- # waitforlisten 4058518 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@831 -- # '[' -z 4058518 ']' 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.510 11:49:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.510 11:49:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.510 11:49:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.510 11:49:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.510 11:49:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.510 11:49:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.510 11:49:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.510 11:49:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.510 11:49:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:29.510 11:49:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:29.510 [2024-07-25 11:49:15.566885] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:29.510 [2024-07-25 11:49:15.566947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058518 ] 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:29.770 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:29.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.770 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:29.770 [2024-07-25 11:49:15.701003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.770 [2024-07-25 11:49:15.785551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@864 -- # return 0 00:07:30.338 11:49:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:30.338 11:49:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:30.338 11:49:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:30.338 11:49:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:30.338 11:49:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:30.338 11:49:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:30.338 11:49:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 
00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:30.338 11:49:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:30.338 11:49:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:30.338 11:49:16 accel -- accel/accel.sh@75 -- # killprocess 4058518 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@950 -- # '[' -z 4058518 ']' 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@954 -- # kill -0 4058518 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@955 -- # uname 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.338 11:49:16 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4058518 00:07:30.597 11:49:16 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.597 11:49:16 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.597 11:49:16 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4058518' 00:07:30.597 killing process with pid 4058518 00:07:30.597 11:49:16 accel -- common/autotest_common.sh@969 -- # kill 4058518 00:07:30.597 11:49:16 accel -- common/autotest_common.sh@974 -- # wait 4058518 00:07:30.856 11:49:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:30.856 11:49:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:30.856 11:49:16 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.856 11:49:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.856 11:49:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.856 11:49:16 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:07:30.856 11:49:16 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:30.856 11:49:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:30.856 11:49:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.856 11:49:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.856 11:49:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.857 11:49:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.857 11:49:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.857 11:49:16 accel.accel_help -- accel/accel.sh@40 -- # local 
IFS=, 00:07:30.857 11:49:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:30.857 11:49:16 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.857 11:49:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:30.857 11:49:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:30.857 11:49:16 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:30.857 11:49:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.857 11:49:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.857 ************************************ 00:07:30.857 START TEST accel_missing_filename 00:07:30.857 ************************************ 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.857 11:49:16 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:31.116 11:49:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:31.116 [2024-07-25 11:49:17.005145] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:31.116 [2024-07-25 11:49:17.005204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058783 ] 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:31.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.116 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:31.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.117 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:31.117 [2024-07-25 11:49:17.136420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.117 [2024-07-25 11:49:17.218974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.376 [2024-07-25 11:49:17.273958] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.376 [2024-07-25 11:49:17.336733] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:31.376 A filename is required. 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.376 00:07:31.376 real 0m0.448s 00:07:31.376 user 0m0.286s 00:07:31.376 sys 0m0.192s 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.376 11:49:17 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:31.376 ************************************ 00:07:31.376 END TEST accel_missing_filename 00:07:31.376 ************************************ 00:07:31.376 11:49:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.376 11:49:17 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:31.376 11:49:17 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.376 11:49:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.635 ************************************ 00:07:31.635 START TEST accel_compress_verify 00:07:31.635 ************************************ 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.635 11:49:17 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.635 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:31.635 11:49:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:31.635 [2024-07-25 11:49:17.532945] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:31.635 [2024-07-25 11:49:17.533004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058863 ] 00:07:31.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.635 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:31.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.635 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:31.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:31.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.636 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:31.636 [2024-07-25 11:49:17.667572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.636 [2024-07-25 11:49:17.746100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.895 [2024-07-25 11:49:17.810275] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.895 [2024-07-25 11:49:17.874896] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:31.895 00:07:31.895 Compression does not support the verify option, aborting. 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.895 00:07:31.895 real 0m0.458s 00:07:31.895 user 0m0.301s 00:07:31.895 sys 0m0.184s 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.895 11:49:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:31.895 ************************************ 00:07:31.895 END TEST accel_compress_verify 00:07:31.895 ************************************ 00:07:31.895 11:49:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:31.895 11:49:17 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:31.895 11:49:17 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.895 11:49:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.154 ************************************ 00:07:32.154 START TEST accel_wrong_workload 00:07:32.154 ************************************ 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:07:32.154 11:49:18 
accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.154 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:32.154 11:49:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:32.154 Unsupported workload type: foobar 00:07:32.154 [2024-07-25 11:49:18.067400] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:32.154 accel_perf options: 00:07:32.154 [-h help message] 00:07:32.154 [-q queue depth per core] 00:07:32.154 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.154 [-T number of threads per core 00:07:32.154 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.154 [-t time in seconds] 00:07:32.154 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.154 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:32.154 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.154 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.154 [-S for crc32c workload, use this seed value (default 0) 00:07:32.155 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.155 [-f for fill workload, use this BYTE value (default 255) 00:07:32.155 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.155 [-y verify result if this switch is on] 00:07:32.155 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.155 Can be used to spread operations across a wider range of memory. 
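The usage text above is printed because the test deliberately passes an unsupported workload (-w foobar) and expects accel_perf to fail. For reference, a well-formed invocation assembled from the flags listed in that usage text would look like the line below; the -q and -o values are illustrative, while "-t 1 -w crc32c -S 32 -y" mirrors the crc32c test that starts further down in this log.

    ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y   # queue depth 64, 4 KiB transfers, 1 s run, CRC32C with seed 32, verify results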
00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.155 00:07:32.155 real 0m0.041s 00:07:32.155 user 0m0.023s 00:07:32.155 sys 0m0.018s 00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.155 11:49:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:32.155 ************************************ 00:07:32.155 END TEST accel_wrong_workload 00:07:32.155 ************************************ 00:07:32.155 Error: writing output failed: Broken pipe 00:07:32.155 11:49:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.155 ************************************ 00:07:32.155 START TEST accel_negative_buffers 00:07:32.155 ************************************ 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:32.155 11:49:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:32.155 -x option must be non-negative. 
00:07:32.155 [2024-07-25 11:49:18.189147] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:32.155 accel_perf options: 00:07:32.155 [-h help message] 00:07:32.155 [-q queue depth per core] 00:07:32.155 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.155 [-T number of threads per core 00:07:32.155 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.155 [-t time in seconds] 00:07:32.155 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.155 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:32.155 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.155 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.155 [-S for crc32c workload, use this seed value (default 0) 00:07:32.155 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.155 [-f for fill workload, use this BYTE value (default 255) 00:07:32.155 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.155 [-y verify result if this switch is on] 00:07:32.155 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.155 Can be used to spread operations across a wider range of memory. 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.155 00:07:32.155 real 0m0.041s 00:07:32.155 user 0m0.023s 00:07:32.155 sys 0m0.019s 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.155 11:49:18 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:32.155 ************************************ 00:07:32.155 END TEST accel_negative_buffers 00:07:32.155 ************************************ 00:07:32.155 Error: writing output failed: Broken pipe 00:07:32.155 11:49:18 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.155 11:49:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.415 ************************************ 00:07:32.415 START TEST accel_crc32c 00:07:32.415 ************************************ 00:07:32.415 11:49:18 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:32.415 11:49:18 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:32.415 [2024-07-25 11:49:18.307548] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:32.415 [2024-07-25 11:49:18.307609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058993 ] 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:32.415 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:32.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.415 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:32.415 [2024-07-25 11:49:18.441108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.415 [2024-07-25 11:49:18.527037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.674 11:49:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.675 11:49:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.675 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.675 11:49:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:34.054 11:49:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.054 00:07:34.054 real 0m1.460s 00:07:34.054 user 0m0.010s 00:07:34.054 sys 0m0.003s 00:07:34.054 11:49:19 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.054 11:49:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:34.054 ************************************ 00:07:34.054 END TEST accel_crc32c 00:07:34.054 ************************************ 00:07:34.054 11:49:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:34.054 11:49:19 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:34.054 11:49:19 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.054 11:49:19 accel -- common/autotest_common.sh@10 -- # set +x 
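Every run in this file hands accel_perf its configuration via -c /dev/fd/62: the accel_perf wrapper in accel.sh builds a JSON accel config from the accel_json_cfg array (build_accel_config), normalizes it with jq -r ., and feeds it through process substitution, which is why a file-descriptor path rather than a file name shows up in the trace. A rough sketch of that pattern; the JSON layout here is assumed from SPDK's usual subsystem-config shape rather than copied from build_accel_config, and the jq step is omitted:

    # Sketch of the -c /dev/fd/NN pattern seen above; bash picks the fd used
    # by <(...), so the exact number (62 in the trace) varies per invocation.
    accel_json_cfg=()   # per-module JSON snippets would be appended here; empty in these runs
    IFS=','             # in accel.sh this is a local IFS inside the wrapper function
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
        -c <(printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' "${accel_json_cfg[*]}") \
        -t 1 -w crc32c -y -C 2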
00:07:34.054 ************************************ 00:07:34.054 START TEST accel_crc32c_C2 00:07:34.054 ************************************ 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.054 11:49:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:34.054 [2024-07-25 11:49:19.845583] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:34.054 [2024-07-25 11:49:19.845637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059265 ] 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:34.054 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:34.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.054 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:34.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.055 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:34.055 [2024-07-25 11:49:19.974660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.055 [2024-07-25 11:49:20.064769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.055 11:49:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.434 
11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.434 00:07:35.434 real 0m1.461s 00:07:35.434 user 0m0.009s 00:07:35.434 sys 0m0.002s 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.434 11:49:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:35.434 ************************************ 00:07:35.434 END TEST accel_crc32c_C2 00:07:35.434 ************************************ 00:07:35.434 11:49:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:35.434 11:49:21 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:35.434 11:49:21 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.434 11:49:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.434 ************************************ 00:07:35.434 START TEST accel_copy 00:07:35.435 ************************************ 00:07:35.435 11:49:21 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:35.435 11:49:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:35.435 [2024-07-25 11:49:21.377627] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:35.435 [2024-07-25 11:49:21.377685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059531 ] 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:35.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.435 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:35.435 [2024-07-25 11:49:21.510629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.695 [2024-07-25 11:49:21.593026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 
11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.695 11:49:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:37.106 11:49:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.106 00:07:37.106 real 0m1.463s 00:07:37.106 user 0m0.009s 00:07:37.106 sys 0m0.001s 00:07:37.106 11:49:22 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.106 11:49:22 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.106 ************************************ 00:07:37.106 END TEST accel_copy 00:07:37.106 ************************************ 00:07:37.106 11:49:22 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.106 11:49:22 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:37.106 11:49:22 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.106 11:49:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.106 ************************************ 00:07:37.106 START TEST accel_fill 00:07:37.106 ************************************ 00:07:37.106 11:49:22 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.106 11:49:22 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:37.106 11:49:22 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:37.106 [2024-07-25 11:49:22.922907] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:37.106 [2024-07-25 11:49:22.922962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059809 ] 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.106 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:37.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:37.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:37.107 [2024-07-25 11:49:23.051542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.107 [2024-07-25 11:49:23.134299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 
00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:37.107 11:49:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:37.107 11:49:23 
00:07:38.484 11:49:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.484 11:49:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:38.484 11:49:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.484
00:07:38.484 real 0m1.460s
00:07:38.484 user 0m0.008s
00:07:38.484 sys 0m0.003s
00:07:38.484 11:49:24 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:38.484 11:49:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:38.484 ************************************
00:07:38.484 END TEST accel_fill
00:07:38.484 ************************************
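The fill pass above ran on the software engine against 4096-byte buffers for a 1-second measurement (accel_module=software, accel_opc=fill in the trace), and the whole sub-test took roughly 1.46 s of wall time. A rough way to repeat just this workload by hand against the same build is sketched below; the binary path and the -t/-w/-y flags are copied from the accel_perf invocations recorded later in this log, -w fill is inferred from accel_opc=fill, and no JSON config is passed, so buffer size and queue depth fall back to the tool's defaults rather than the harness's values.

  # Hedged sketch: rerun only the fill workload outside the autotest harness.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -y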
00:07:38.484 11:49:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:38.484 11:49:24 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:38.484 11:49:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:38.484 11:49:24 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.484 ************************************
00:07:38.484 START TEST accel_copy_crc32c
00:07:38.484 ************************************
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:38.484 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:38.485 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:38.485 [2024-07-25 11:49:24.458665] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:07:38.485 [2024-07-25 11:49:24.458718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060081 ]
00:07:38.485 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:38.485 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL pair above repeats for every remaining QAT virtual function from 0000:3d:01.1 through 0000:3f:02.7]
00:07:38.485 [2024-07-25 11:49:24.587214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.745 [2024-07-25 11:49:24.670614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
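Each sub-test starts its own accel_perf process, and the DPDK EAL parameter line above carries a per-process --file-prefix token (spdk_pid4060081 here; the later runs use different values). When combing through a long autotest log, those tokens are an easy way to enumerate the individual accel_perf instances; the log file name below is a placeholder.

  # List the accel_perf instances recorded in a saved copy of this console log,
  # one token per sub-test, by extracting the unique EAL --file-prefix values.
  grep -o 'file-prefix=spdk_pid[0-9]*' crypto-phy-autotest.log | sort -u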
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:38.745 11:49:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
[blank 'val=' entries and the 'case "$var" in' / 'IFS=:' / 'read -r var val' trace lines are interspersed among and after the values above]
00:07:40.125 11:49:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:40.125 11:49:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:40.125 11:49:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:40.125
00:07:40.125 real 0m1.461s
00:07:40.125 user 0m0.011s
00:07:40.125 sys 0m0.001s
00:07:40.125 11:49:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.125 11:49:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:40.125 ************************************
00:07:40.125 END TEST accel_copy_crc32c
00:07:40.125 ************************************
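copy_crc32c is a fused operation: the engine copies a source buffer to a destination and computes a CRC-32C over the data in the same pass, and the -y flag asks the tool to verify the result. A conceptual shell stand-in follows; plain cksum is CRC-32 rather than CRC-32C and the temp files are purely illustrative, so this only mirrors the shape of the operation, not the exact checksum.

  # Conceptual sketch only: copy a 4096-byte buffer and checksum it in one pass, then verify.
  src=$(mktemp); dst=$(mktemp)
  head -c 4096 /dev/urandom > "$src"
  tee "$dst" < "$src" | cksum            # copy and checksum the same stream
  cmp "$src" "$dst" && echo "copy verified"
  rm -f "$src" "$dst"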
00:07:40.125 11:49:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:40.125 11:49:25 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:07:40.125 11:49:25 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.125 11:49:25 accel -- common/autotest_common.sh@10 -- # set +x
00:07:40.125 ************************************
00:07:40.125 START TEST accel_copy_crc32c_C2
00:07:40.125 ************************************
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:40.125 11:49:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
[the IFS=:/read -r var val setup and the build_accel_config xtrace (accel_json_cfg=(), the [[ 0 -gt 0 ]] checks, local IFS=',', jq -r .) follow, identical in form to the accel_copy_crc32c block above]
00:07:40.125 [2024-07-25 11:49:25.996971] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:07:40.125 [2024-07-25 11:49:25.997027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060363 ]
00:07:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:40.125 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL pair above repeats for every remaining QAT virtual function from 0000:3d:01.1 through 0000:3f:02.7]
00:07:40.125 [2024-07-25 11:49:26.128665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.125 [2024-07-25 11:49:26.212493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:40.385 11:49:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
[blank 'val=' entries and the 'case "$var" in' / 'IFS=:' / 'read -r var val' trace lines are interspersed among and after the values above]
00:07:41.322 11:49:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.322 11:49:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:41.322 11:49:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.322
00:07:41.322 real 0m1.464s
00:07:41.322 user 0m0.012s
00:07:41.322 sys 0m0.002s
00:07:41.322 11:49:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:41.322 11:49:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:41.322 ************************************
00:07:41.322 END TEST accel_copy_crc32c_C2
00:07:41.322 ************************************
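The -C 2 variant repeats copy_crc32c but, judging by the flag and the 4096-byte/8192-byte values echoed in the trace, chains two buffers per operation so that a single CRC covers both halves; that reading of -C is inferred from this log rather than quoted from the tool's help text. The sketch below shows the chaining idea, with plain cksum again standing in for CRC-32C.

  # Sketch of the chained idea: one checksum accumulated over two 4 KiB chunks
  # must equal the checksum of the 8 KiB buffer they came from.
  buf=$(mktemp)
  head -c 8192 /dev/urandom > "$buf"
  whole=$(cksum < "$buf" | awk '{print $1}')
  chained=$({ head -c 4096 "$buf"; tail -c 4096 "$buf"; } | cksum | awk '{print $1}')
  [ "$whole" = "$chained" ] && echo "chained checksum matches"
  rm -f "$buf"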
00:07:41.582 11:49:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:41.582 11:49:27 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:41.582 11:49:27 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:41.582 11:49:27 accel -- common/autotest_common.sh@10 -- # set +x
00:07:41.582 ************************************
00:07:41.582 START TEST accel_dualcast
00:07:41.582 ************************************
00:07:41.582 11:49:27 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y
00:07:41.582 11:49:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:41.582 11:49:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:41.582 11:49:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:41.582 11:49:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:41.582 11:49:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
[the IFS=:/read -r var val setup and the build_accel_config xtrace follow, identical in form to the earlier sub-tests]
00:07:41.582 [2024-07-25 11:49:27.533291] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:07:41.582 [2024-07-25 11:49:27.533348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060637 ]
00:07:41.582 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:41.582 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL pair above repeats for every remaining QAT virtual function from 0000:3d:01.1 through 0000:3f:02.7]
00:07:41.582 [2024-07-25 11:49:27.667370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.841 [2024-07-25 11:49:27.748852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:41.842 11:49:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
[blank 'val=' entries and the 'case "$var" in' / 'IFS=:' / 'read -r var val' trace lines are interspersed among and after the values above]
00:07:43.221 11:49:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:43.221 11:49:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:43.221 11:49:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:43.221
00:07:43.221 real 0m1.456s
00:07:43.221 user 0m0.009s
00:07:43.221 sys 0m0.003s
00:07:43.221 11:49:28 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:43.221 11:49:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:43.221 ************************************
00:07:43.221 END TEST accel_dualcast
00:07:43.221 ************************************
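dualcast writes one source buffer to two destination buffers in a single operation, and with -y the test verifies both copies against the source. A conceptual shell stand-in, with temp files and the 4096-byte size taken from the trace purely for illustration:

  # Conceptual sketch of the dualcast opcode: one source, two destinations, both verified.
  src=$(mktemp); d1=$(mktemp); d2=$(mktemp)
  head -c 4096 /dev/urandom > "$src"
  tee "$d1" "$d2" < "$src" > /dev/null   # one read of the source, two copies written
  cmp "$src" "$d1" && cmp "$src" "$d2" && echo "dualcast verified"
  rm -f "$src" "$d1" "$d2"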
00:07:43.221 11:49:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:43.221 11:49:28 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:43.221 11:49:28 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:43.221 11:49:28 accel -- common/autotest_common.sh@10 -- # set +x
00:07:43.221 ************************************
00:07:43.221 START TEST accel_compare
00:07:43.221 ************************************
00:07:43.221 11:49:29 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compare -y
00:07:43.221 11:49:29 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:07:43.221 11:49:29 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:07:43.221 11:49:29 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:43.221 11:49:29 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:43.221 11:49:29 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
[the IFS=:/read -r var val setup and the build_accel_config xtrace follow, identical in form to the earlier sub-tests]
00:07:43.221 [2024-07-25 11:49:29.066440] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:07:43.221 [2024-07-25 11:49:29.066496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060920 ]
00:07:43.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.221 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL pair above repeats for every remaining QAT virtual function from 0000:3d:01.1 through 0000:3f:02.7]
00:07:43.221 [2024-07-25 11:49:29.197154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.221 [2024-07-25 11:49:29.280474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:43.481 11:49:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
[blank 'val=' entries and the 'case "$var" in' / 'IFS=:' / 'read -r var val' trace lines are interspersed among and after the values above]
00:07:44.417 11:49:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:44.417 11:49:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:44.417 11:49:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:44.417
00:07:44.417 real 0m1.459s
00:07:44.417 user 0m0.008s
00:07:44.417 sys 0m0.003s
00:07:44.417 11:49:30 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:44.417 11:49:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:44.417 ************************************
00:07:44.417 END TEST accel_compare
00:07:44.417 ************************************
00:07:44.417 11:49:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:44.417 11:49:30 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:44.417 11:49:30 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:44.417 11:49:30 accel -- common/autotest_common.sh@10 -- # set +x
00:07:44.676 ************************************
00:07:44.676 START TEST accel_xor
00:07:44.676 ************************************
00:07:44.676 11:49:30 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y
00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@16
-- # local accel_opc 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:44.676 11:49:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:44.676 [2024-07-25 11:49:30.580302] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:44.676 [2024-07-25 11:49:30.580343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061200 ] 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:44.676 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:44.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.676 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:44.676 [2024-07-25 11:49:30.695593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.677 [2024-07-25 11:49:30.778675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r 
var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 11:49:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:45.871 11:49:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.871 00:07:45.871 real 0m1.418s 00:07:45.871 user 0m0.010s 00:07:45.871 sys 0m0.001s 00:07:45.871 11:49:31 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.871 11:49:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:45.871 ************************************ 00:07:45.871 END TEST accel_xor 00:07:45.871 ************************************ 00:07:46.130 11:49:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:46.130 11:49:32 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:46.130 
11:49:32 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.130 11:49:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 ************************************ 00:07:46.130 START TEST accel_xor 00:07:46.130 ************************************ 00:07:46.130 11:49:32 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:46.130 11:49:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:46.130 [2024-07-25 11:49:32.094324] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:46.130 [2024-07-25 11:49:32.094384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061480 ] 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:46.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.130 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.2 cannot be 
used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:46.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.131 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:46.131 [2024-07-25 11:49:32.229974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.391 [2024-07-25 11:49:32.311552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 
00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 11:49:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:47.770 11:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.770 00:07:47.770 real 0m1.465s 00:07:47.770 user 0m0.012s 00:07:47.770 sys 0m0.001s 00:07:47.770 11:49:33 accel.accel_xor -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.770 11:49:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:47.770 ************************************ 00:07:47.770 END TEST accel_xor 00:07:47.770 ************************************ 00:07:47.770 11:49:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:47.770 11:49:33 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:47.770 11:49:33 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.770 11:49:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.770 ************************************ 00:07:47.770 START TEST accel_dif_verify 00:07:47.770 ************************************ 00:07:47.770 11:49:33 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:47.770 11:49:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:47.770 [2024-07-25 11:49:33.631883] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:47.770 [2024-07-25 11:49:33.631937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061765 ] 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:47.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.770 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:47.770 [2024-07-25 11:49:33.761982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.770 [2024-07-25 11:49:33.845324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 
11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:48.029 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 11:49:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:48.966 11:49:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.966 00:07:48.966 real 0m1.469s 00:07:48.966 user 0m0.012s 00:07:48.966 sys 0m0.001s 00:07:48.966 11:49:35 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.966 11:49:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:48.966 ************************************ 00:07:48.966 END TEST accel_dif_verify 00:07:48.966 ************************************ 00:07:49.225 11:49:35 accel -- accel/accel.sh@112 -- # 
run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:49.225 11:49:35 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:49.225 11:49:35 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.225 11:49:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.225 ************************************ 00:07:49.225 START TEST accel_dif_generate 00:07:49.225 ************************************ 00:07:49.225 11:49:35 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:49.225 11:49:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:49.225 [2024-07-25 11:49:35.180401] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:49.225 [2024-07-25 11:49:35.180457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062051 ] 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:49.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.225 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:49.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.226 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:49.226 [2024-07-25 11:49:35.310955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.485 [2024-07-25 11:49:35.395176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 
11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.485 11:49:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:50.864 11:49:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.864 00:07:50.864 real 0m1.463s 00:07:50.864 user 0m0.010s 00:07:50.864 sys 0m0.003s 00:07:50.864 11:49:36 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.864 11:49:36 accel.accel_dif_generate -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.864 ************************************ 00:07:50.864 END TEST accel_dif_generate 00:07:50.864 ************************************ 00:07:50.864 11:49:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:50.864 11:49:36 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:50.864 11:49:36 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.864 11:49:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.864 ************************************ 00:07:50.864 START TEST accel_dif_generate_copy 00:07:50.864 ************************************ 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:50.864 11:49:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:50.864 [2024-07-25 11:49:36.719290] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
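The START/END banners and the real/user/sys lines around each subtest come from the run_test helper referenced above (common/autotest_common.sh). A simplified sketch of that wrapper pattern, reconstructed from what the log shows rather than quoted from the script:

    # Hedged sketch of the run_test pattern: name a subtest, print banners, and
    # time the wrapped command (bash's time keyword produces the real/user/sys
    # lines seen in this log). The real helper also manages xtrace and exit codes.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    # accel_test is the accel suite's own test function being wrapped here.
    run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy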
00:07:50.864 [2024-07-25 11:49:36.719348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062328 ] 00:07:50.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.864 EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6 cannot be used (the same pair of messages is repeated once per device)
00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:50.865 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.865 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:50.865 [2024-07-25 11:49:36.851453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.865 [2024-07-25 11:49:36.935307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 
00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.124 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.125 11:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.101 00:07:52.101 real 0m1.469s 00:07:52.101 user 0m0.010s 00:07:52.101 sys 0m0.001s 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.101 11:49:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.101 ************************************ 00:07:52.101 END TEST accel_dif_generate_copy 00:07:52.101 
************************************ 00:07:52.101 11:49:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:52.101 11:49:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:52.101 11:49:38 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:52.101 11:49:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.101 11:49:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.361 ************************************ 00:07:52.361 START TEST accel_comp 00:07:52.361 ************************************ 00:07:52.361 11:49:38 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:52.361 11:49:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:52.361 [2024-07-25 11:49:38.257527] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
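Unlike the dif workloads, the compress case needs real input data, so the invocation above adds '-l <file>' pointing at test/accel/bib inside the SPDK checkout. A minimal standalone equivalent (paths copied from the trace; running without a JSON config is an assumption about the software path):

    # Hedged sketch: compress the bundled test input in software for 1 second,
    # mirroring the accel_comp invocation logged above.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"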
00:07:52.361 [2024-07-25 11:49:38.257584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062618 ] 00:07:52.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.361 EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6 cannot be used (the same pair of messages is repeated once per device)
00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:52.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.362 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:52.362 [2024-07-25 11:49:38.389773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.362 [2024-07-25 11:49:38.472459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:52.621 
11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.621 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:52.622 11:49:38 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:52.622 11:49:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:54.002 11:49:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.002 00:07:54.002 real 0m1.460s 00:07:54.002 user 0m0.008s 00:07:54.002 sys 0m0.004s 00:07:54.002 11:49:39 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.002 11:49:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 ************************************ 00:07:54.002 END TEST accel_comp 00:07:54.002 ************************************ 00:07:54.002 11:49:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:54.002 11:49:39 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:54.002 11:49:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.002 11:49:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 ************************************ 00:07:54.002 START TEST accel_decomp 00:07:54.002 ************************************ 00:07:54.002 11:49:39 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@19 -- # 
IFS=: 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:54.002 11:49:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:54.002 [2024-07-25 11:49:39.791799] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:54.002 [2024-07-25 11:49:39.791854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062899 ] 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:54.002 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:54.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.002 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:54.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.003 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:54.003 [2024-07-25 11:49:39.921189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.003 [2024-07-25 11:49:40.003180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=1 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:54.003 11:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.378 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.379 11:49:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.379 11:49:41 
accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.379 00:07:55.379 real 0m1.450s 00:07:55.379 user 0m0.010s 00:07:55.379 sys 0m0.002s 00:07:55.379 11:49:41 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.379 11:49:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:55.379 ************************************ 00:07:55.379 END TEST accel_decomp 00:07:55.379 ************************************ 00:07:55.379 11:49:41 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:55.379 11:49:41 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:55.379 11:49:41 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.379 11:49:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.379 ************************************ 00:07:55.379 START TEST accel_decomp_full 00:07:55.379 ************************************ 00:07:55.379 11:49:41 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:55.379 11:49:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:55.379 [2024-07-25 11:49:41.327810] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:55.379 [2024-07-25 11:49:41.327944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063187 ] 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:55.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.379 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:55.638 [2024-07-25 11:49:41.531589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.638 [2024-07-25 11:49:41.619505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.638 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.638 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.638 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 
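The repeated qat_pci_device_allocate()/EAL messages above indicate that none of the listed QAT endpoints could be claimed for this run, which is consistent with accel_module=software in the surrounding trace: the decompress work is handled by the software accel module. The accel_decomp_full invocation being traced here can be reproduced by hand roughly as follows. This is a sketch based on the accel_perf command line recorded in the trace; the workspace path is specific to this Jenkins node, and the JSON config that build_accel_config normally feeds in via -c /dev/fd/62 is simply omitted.
# Sketch: rerun the accel_decomp_full case manually (adjust SPDK to your own checkout).
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
lspci | grep -i quickassist   # optional, generic check for visible QAT endpoints (not part of accel.sh)
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0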
00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:55.639 11:49:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:57.017 11:49:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.017 00:07:57.017 real 0m1.568s 00:07:57.017 user 0m0.012s 00:07:57.017 sys 0m0.000s 00:07:57.017 11:49:42 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.017 11:49:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:57.017 ************************************ 00:07:57.017 END TEST accel_decomp_full 00:07:57.017 ************************************ 00:07:57.017 11:49:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:57.017 11:49:42 accel -- 
common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:57.017 11:49:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.017 11:49:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.017 ************************************ 00:07:57.017 START TEST accel_decomp_mcore 00:07:57.017 ************************************ 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:57.017 11:49:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:57.018 [2024-07-25 11:49:42.953967] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
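accel_decomp_mcore differs from the previous run only in the core mask: the traced command adds -m 0xf, which requests four cores and matches the "Total cores available: 4" notice and the four "Reactor started on core N" lines further down. A minimal sketch, under the same path and config assumptions as the earlier one:
# Sketch: rerun the accel_decomp_mcore case manually on four cores.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf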
00:07:57.018 [2024-07-25 11:49:42.954021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063465 ] 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:57.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:57.018 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:57.018 [2024-07-25 11:49:43.085308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.277 [2024-07-25 11:49:43.172803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.277 [2024-07-25 11:49:43.172897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.277 [2024-07-25 11:49:43.172981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.277 [2024-07-25 11:49:43.172985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 
00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:57.277 11:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.653 00:07:58.653 real 0m1.473s 00:07:58.653 user 0m4.673s 00:07:58.653 sys 0m0.192s 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.653 11:49:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:58.653 ************************************ 00:07:58.653 END TEST accel_decomp_mcore 00:07:58.653 ************************************ 00:07:58.653 11:49:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:58.653 11:49:44 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:58.653 11:49:44 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.653 11:49:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.653 ************************************ 00:07:58.653 START TEST accel_decomp_full_mcore 00:07:58.653 ************************************ 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:58.653 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
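accel_decomp_full_mcore combines the two preceding variants: -o 0 (the "full" case, presumably driving the whole input rather than 4 KiB chunks, an inference from the test names and from the traced '111250 bytes' value replacing the '4096 bytes' one) together with the -m 0xf four-core mask. A minimal sketch, same assumptions as above:
# Sketch: rerun the accel_decomp_full_mcore case manually.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf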
00:07:58.653 [2024-07-25 11:49:44.508699] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:58.653 [2024-07-25 11:49:44.508760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063756 ] 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:58.653 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.653 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.654 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:58.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.654 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:58.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.654 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:58.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.654 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:58.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.654 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:58.654 [2024-07-25 11:49:44.642973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.654 [2024-07-25 11:49:44.730623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.654 [2024-07-25 11:49:44.730717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.654 [2024-07-25 11:49:44.730800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.654 [2024-07-25 11:49:44.730803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:58.913 11:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.850 00:07:59.850 real 0m1.489s 00:07:59.850 user 0m4.719s 00:07:59.850 sys 0m0.204s 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.850 11:49:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:59.850 ************************************ 00:07:59.850 END TEST accel_decomp_full_mcore 00:07:59.850 ************************************ 00:08:00.109 11:49:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:00.109 11:49:46 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:00.109 11:49:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.109 11:49:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.109 ************************************ 00:08:00.109 START TEST accel_decomp_mthread 00:08:00.109 ************************************ 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 
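The final case started here, accel_decomp_mthread, goes back to a single core (-c 0x1 in the EAL parameters below, "Total cores available: 1") and adds -T 2 to the accel_perf command. Given the test name, -T presumably selects the number of worker threads; that reading is an assumption from context, not from the tool's help output. A minimal sketch, same assumptions as above:
# Sketch: rerun the accel_decomp_mthread case manually.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
# -T 2 is copied verbatim from the traced command; assumed here to mean two worker threads.
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2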
00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:00.109 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:00.109 [2024-07-25 11:49:46.084549] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:00.109 [2024-07-25 11:49:46.084609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064036 ] 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 
0000:3f:01.1 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:00.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.109 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:00.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:00.110 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:00.110 [2024-07-25 11:49:46.217733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.368 [2024-07-25 11:49:46.301565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.368 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:00.369 11:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 
00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.748 00:08:01.748 real 0m1.479s 00:08:01.748 user 0m1.288s 00:08:01.748 sys 0m0.192s 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.748 11:49:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:01.748 ************************************ 00:08:01.748 END TEST accel_decomp_mthread 00:08:01.748 ************************************ 00:08:01.748 11:49:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:01.748 11:49:47 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:01.748 11:49:47 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.748 11:49:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.748 ************************************ 00:08:01.748 START TEST accel_decomp_full_mthread 00:08:01.748 ************************************ 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:01.748 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
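(The accel_decomp_full_mthread subtest starting here differs from the previous one only by the extra -o 0. Judging from the trace that follows, that switches the per-operation buffer from the default 4096 bytes to the whole input file (the loop below records '111250 bytes'), still with the two worker threads implied by -T 2. A sketch of the invocation, with the harness-supplied /dev/fd/62 again standing in for a real config file:)

# Same decompress run as before, plus '-o 0' for the full-buffer variant (per the 111250-byte value below):
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2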
00:08:01.748 [2024-07-25 11:49:47.644539] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:01.748 [2024-07-25 11:49:47.644598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064324 ] 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:01.748 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.748 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:01.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.749 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:01.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.749 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:01.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.749 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:01.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.749 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:01.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.749 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:01.749 [2024-07-25 11:49:47.778371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.749 [2024-07-25 11:49:47.861436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 
00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 11:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.386 
11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.386 00:08:03.386 real 0m1.499s 00:08:03.386 user 0m1.324s 00:08:03.386 sys 0m0.179s 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.386 11:49:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:03.386 ************************************ 00:08:03.386 END TEST accel_decomp_full_mthread 00:08:03.386 ************************************ 00:08:03.386 11:49:49 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:08:03.386 11:49:49 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:08:03.386 11:49:49 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:08:03.386 11:49:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:03.386 11:49:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4064602 00:08:03.386 11:49:49 accel -- accel/accel.sh@63 -- # waitforlisten 4064602 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@831 -- # '[' -z 4064602 ']' 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.386 11:49:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.386 11:49:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.386 11:49:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.386 11:49:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.386 11:49:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.386 11:49:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.386 11:49:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.386 11:49:49 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:03.386 11:49:49 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:03.386 11:49:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:03.386 11:49:49 accel -- accel/accel.sh@41 -- # jq -r . 00:08:03.386 [2024-07-25 11:49:49.219989] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
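(From this point the harness exercises the DPDK compressdev path: with COMPRESSDEV=1, build_accel_config appends the compressdev_scan_accel_module entry shown in the trace, and spdk_tgt is started with that accel config on fd 63. The sketch below writes the equivalent config to an ordinary file; the file name is illustrative, the subsystems wrapper matches the shape that save_config reports further down, and the method and params are exactly what the trace records. "pmd": 0 appears to let the module auto-select an available compression PMD, which resolves to QAT on this node.)

# illustrative file name; the JSON layout mirrors the accel subsystem config checked below via save_config
cat > /tmp/compressdev_accel.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        { "method": "compressdev_scan_accel_module", "params": { "pmd": 0 } }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /tmp/compressdev_accel.json &
# once the target is up, the opcode-to-module mapping the test verifies below can be queried with
# the regular RPC client (typically spdk/scripts/rpc.py):
#   rpc.py accel_get_opc_assignments
# compress/decompress are expected to report dpdk_compressdev; every other opcode stays on software.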
00:08:03.386 [2024-07-25 11:49:49.220051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064602 ] 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:03.386 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:03.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.386 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:03.386 [2024-07-25 11:49:49.352229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.386 [2024-07-25 11:49:49.439121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.318 [2024-07-25 11:49:50.137457] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:04.318 11:49:50 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.318 11:49:50 accel -- common/autotest_common.sh@864 -- # return 0 00:08:04.318 11:49:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:04.318 11:49:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:04.319 11:49:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:04.319 11:49:50 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:08:04.319 11:49:50 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:08:04.319 11:49:50 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:04.319 11:49:50 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:08:04.319 11:49:50 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.319 11:49:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.319 11:49:50 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:08:04.577 11:49:50 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.577 "method": "compressdev_scan_accel_module", 00:08:04.577 11:49:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:04.577 11:49:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:04.577 11:49:50 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.577 11:49:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.577 11:49:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:04.577 11:49:50 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.577 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.577 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.577 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.578 11:49:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.578 11:49:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.578 11:49:50 accel -- accel/accel.sh@75 -- # killprocess 4064602 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@950 -- # '[' -z 4064602 ']' 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@954 -- # kill -0 4064602 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@955 -- # uname 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4064602 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4064602' 00:08:04.578 killing process with pid 4064602 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@969 -- # kill 4064602 00:08:04.578 11:49:50 accel -- common/autotest_common.sh@974 -- # wait 4064602 00:08:04.836 11:49:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:04.836 11:49:50 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:04.836 11:49:50 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:04.836 11:49:50 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.836 11:49:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.096 ************************************ 00:08:05.096 START TEST accel_cdev_comp 00:08:05.096 ************************************ 00:08:05.096 11:49:50 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:05.096 11:49:50 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:08:05.096 [2024-07-25 11:49:51.009640] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:05.096 [2024-07-25 11:49:51.009696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064932 ] 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:05.096 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:05.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:05.096 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:05.096 [2024-07-25 11:49:51.140561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.355 [2024-07-25 11:49:51.223859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.923 [2024-07-25 11:49:51.913787] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:05.923 [2024-07-25 11:49:51.916152] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x101efe0 PMD being used: compress_qat 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- 
accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 [2024-07-25 11:49:51.919933] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1223d30 PMD being used: compress_qat 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.923 11:49:51 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.305 
11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:07.305 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.306 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.306 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.306 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:07.306 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:07.306 11:49:53 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:07.306 00:08:07.306 real 0m2.094s 00:08:07.306 user 0m0.011s 00:08:07.306 sys 0m0.001s 00:08:07.306 11:49:53 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.306 11:49:53 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:08:07.306 ************************************ 00:08:07.306 END TEST accel_cdev_comp 00:08:07.306 ************************************ 00:08:07.306 11:49:53 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:07.306 11:49:53 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:07.306 11:49:53 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.306 11:49:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.306 ************************************ 00:08:07.306 START TEST accel_cdev_decomp 00:08:07.306 ************************************ 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:07.306 11:49:53 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 
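Aside for readers skimming this trace: each accel_cdev_* case in this log follows the same pattern — accel.sh builds a one-entry JSON config that loads the DPDK compressdev accel module (pmd 0, which the "PMD being used: compress_qat" notices show resolving to the QAT PMD on this node) and then runs the accel_perf example against test/accel/bib. The lines below are a minimal sketch reconstructed only from the commands traced above; the exact JSON framing passed to accel_perf on fd 62 is an assumption, since only the compressdev_scan_accel_module object itself is visible in the trace.

  # Sketch only -- not part of the captured output. Paths and flags are copied
  # from the trace above; the JSON framing fed in on fd 62 is an assumption.
  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  accel_json_cfg=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
  # -w selects the workload (compress or decompress), -t 1 runs it for one
  # second, -l points at the bib test file and -y verifies the result; the
  # block-size/queue-depth values echoed in the val=... lines come from
  # accel_perf's own settings dump, which accel.sh parses in its read loop.
  "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$spdk/test/accel/bib" -y \
      62< <( (IFS=,; echo "[${accel_json_cfg[*]}]") | jq -r . )

The mcore variants further down differ only in adding -m 0xf (and -o 0 for the "full" cases) to the same accel_perf command line, which is why their traces show four reactors and several compress_qat channels instead of one.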
00:08:07.306 [2024-07-25 11:49:53.186690] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:07.306 [2024-07-25 11:49:53.186814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065426 ] 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:07.306 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:07.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:07.306 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:07.306 [2024-07-25 11:49:53.390713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.565 [2024-07-25 11:49:53.477465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.132 [2024-07-25 11:49:54.164440] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:08.132 [2024-07-25 11:49:54.166810] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1105fe0 PMD being used: compress_qat 00:08:08.132 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.132 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.132 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 [2024-07-25 11:49:54.170732] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x130ad30 PMD being used: compress_qat 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.133 11:49:54 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:09.512 00:08:09.512 real 0m2.181s 00:08:09.512 user 0m0.011s 00:08:09.512 sys 
0m0.001s 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.512 11:49:55 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:09.512 ************************************ 00:08:09.512 END TEST accel_cdev_decomp 00:08:09.512 ************************************ 00:08:09.512 11:49:55 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.512 11:49:55 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:09.512 11:49:55 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.512 11:49:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.512 ************************************ 00:08:09.512 START TEST accel_cdev_decomp_full 00:08:09.512 ************************************ 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:09.512 11:49:55 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:09.512 [2024-07-25 11:49:55.430691] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:09.513 [2024-07-25 11:49:55.430749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065716 ] 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:09.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.513 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:09.513 [2024-07-25 11:49:55.562923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.772 [2024-07-25 11:49:55.646181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.341 [2024-07-25 11:49:56.337492] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:10.341 [2024-07-25 11:49:56.339856] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc85fe0 PMD being used: compress_qat 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 [2024-07-25 11:49:56.342903] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc892b0 PMD being used: compress_qat 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.341 11:49:56 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 
accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:11.720 00:08:11.720 real 0m2.098s 00:08:11.720 user 0m0.010s 00:08:11.720 sys 0m0.002s 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.720 11:49:57 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:11.720 ************************************ 00:08:11.720 END TEST accel_cdev_decomp_full 00:08:11.720 ************************************ 00:08:11.720 11:49:57 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.720 11:49:57 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:11.720 11:49:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.720 11:49:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.720 ************************************ 00:08:11.720 START TEST accel_cdev_decomp_mcore 00:08:11.720 ************************************ 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:11.720 11:49:57 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:11.720 [2024-07-25 11:49:57.603171] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:11.720 [2024-07-25 11:49:57.603226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066191 ] 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.720 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:11.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:11.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.721 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:11.721 [2024-07-25 11:49:57.740062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.721 [2024-07-25 11:49:57.830363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.721 [2024-07-25 11:49:57.830457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.721 [2024-07-25 11:49:57.830479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.721 [2024-07-25 11:49:57.830487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.659 [2024-07-25 11:49:58.518012] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:12.659 [2024-07-25 11:49:58.520386] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc38600 PMD being used: compress_qat 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 [2024-07-25 11:49:58.525569] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f456419b8b0 PMD being used: compress_qat 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 
[2024-07-25 11:49:58.526458] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f455c19b8b0 PMD being used: compress_qat 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 [2024-07-25 11:49:58.527265] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc3d890 PMD being used: compress_qat 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 [2024-07-25 11:49:58.527436] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f455419b8b0 PMD being used: compress_qat 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:12.659 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.660 11:49:58 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.598 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:13.599 00:08:13.599 real 0m2.126s 00:08:13.599 user 0m6.901s 00:08:13.599 sys 0m0.544s 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.599 11:49:59 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 ************************************ 00:08:13.599 END TEST accel_cdev_decomp_mcore 00:08:13.599 ************************************ 00:08:13.858 11:49:59 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.858 11:49:59 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:13.858 11:49:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.858 11:49:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 ************************************ 00:08:13.858 START TEST accel_cdev_decomp_full_mcore 00:08:13.858 ************************************ 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 
00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:13.858 11:49:59 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:13.858 [2024-07-25 11:49:59.810212] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:13.858 [2024-07-25 11:49:59.810269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066549 ] 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:13.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.858 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:13.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.859 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:13.859 [2024-07-25 11:49:59.946891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.118 [2024-07-25 11:50:00.039799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.118 [2024-07-25 11:50:00.039892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.118 [2024-07-25 11:50:00.039979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.118 [2024-07-25 11:50:00.039983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.687 [2024-07-25 11:50:00.739106] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:14.687 [2024-07-25 11:50:00.741498] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e97600 PMD being used: compress_qat 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 [2024-07-25 11:50:00.745796] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f2e0c19b8b0 PMD being used: compress_qat 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:14.687 [2024-07-25 11:50:00.746621] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f2e0419b8b0 PMD being used: 
compress_qat 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 [2024-07-25 11:50:00.747481] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e976a0 PMD being used: compress_qat 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 [2024-07-25 11:50:00.747678] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f2dfc19b8b0 PMD being used: compress_qat 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.687 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.688 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:14.688 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:14.688 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:14.688 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:14.688 11:50:00 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.076 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.077 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:16.077 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.077 11:50:01 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:16.077 00:08:16.077 real 0m2.143s 00:08:16.077 user 0m6.948s 00:08:16.077 sys 0m0.553s 00:08:16.077 11:50:01 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.077 11:50:01 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:16.077 ************************************ 00:08:16.077 END TEST accel_cdev_decomp_full_mcore 00:08:16.077 ************************************ 00:08:16.077 11:50:01 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:16.077 11:50:01 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:16.077 11:50:01 accel -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.077 11:50:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.077 ************************************ 00:08:16.077 START TEST accel_cdev_decomp_mthread 00:08:16.077 ************************************ 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:16.077 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:16.077 [2024-07-25 11:50:02.036546] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
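[editor's note] For contrast with the _mcore runs earlier in this log, the accel_cdev_decomp_mthread command traced above drops the -m core mask and passes -T 2 instead; the val=2 readback further below corresponds to that argument, and the EAL parameters below show this process confined to a single core (-c 0x1). A side-by-side sketch, with paths and flags copied from this log (the reading of -T as a worker-thread count is inferred from the test name, and -c /dev/fd/62 is kept as the harness uses it; see the config sketch earlier):
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  BIB=$SPDK/test/accel/bib
  # _mcore variant (earlier sub-tests): parallelism via the 0xf core mask, four reactors.
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -o 0 -m 0xf
  # _mthread variant (this sub-test): one core, apparently two worker threads via -T 2.
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -T 2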
00:08:16.077 [2024-07-25 11:50:02.036606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066898 ] 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:16.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.077 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:16.077 [2024-07-25 11:50:02.168865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.337 [2024-07-25 11:50:02.252576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.906 [2024-07-25 11:50:02.939454] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:16.906 [2024-07-25 11:50:02.941843] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2171fe0 PMD being used: compress_qat 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 [2024-07-25 11:50:02.946422] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2177180 PMD being used: compress_qat 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 [2024-07-25 11:50:02.948671] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2299b20 PMD being used: 
compress_qat 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@20 -- # val=2 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 11:50:02 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.298 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.298 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:18.299 00:08:18.299 real 0m2.102s 00:08:18.299 user 0m1.586s 00:08:18.299 sys 0m0.517s 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.299 11:50:04 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:18.299 ************************************ 00:08:18.299 END TEST accel_cdev_decomp_mthread 00:08:18.299 ************************************ 00:08:18.299 11:50:04 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.299 11:50:04 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:18.299 11:50:04 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.299 11:50:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.299 ************************************ 00:08:18.299 START TEST accel_cdev_decomp_full_mthread 00:08:18.299 ************************************ 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.299 11:50:04 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:18.299 11:50:04 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:18.299 [2024-07-25 11:50:04.223151] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:18.299 [2024-07-25 11:50:04.223210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067375 ] 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested 
device 0000:3f:01.1 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:18.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.299 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:18.299 [2024-07-25 11:50:04.355321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.558 [2024-07-25 11:50:04.438520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.126 [2024-07-25 11:50:05.122499] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:19.126 [2024-07-25 11:50:05.124873] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23abfe0 PMD being used: compress_qat 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.126 [2024-07-25 11:50:05.128658] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23ac080 PMD being used: compress_qat 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.126 [2024-07-25 11:50:05.131122] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25b0c10 PMD being used: compress_qat 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.126 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.127 11:50:05 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:20.505 00:08:20.505 real 0m2.098s 00:08:20.505 user 0m1.576s 00:08:20.505 sys 0m0.527s 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.505 11:50:06 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:20.505 ************************************ 00:08:20.505 END TEST accel_cdev_decomp_full_mthread 00:08:20.505 ************************************ 00:08:20.505 11:50:06 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:08:20.505 11:50:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:20.505 11:50:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:20.505 11:50:06 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:20.505 11:50:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.505 11:50:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.505 11:50:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.505 11:50:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.505 
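[editor's note] The last sub-test launched above, accel_dif_functional_tests, runs test/accel/dif/dif rather than accel_perf. The CUnit "Failed to compare ..." messages it logs further below are expected negative cases: each deliberately mismatches one field of the 8-byte T10 protection-information tuple carried with a block, and the test passes when the mismatch is detected. As a reading aid (standard T10 PI layout; the expected/actual values are patterns injected by the test itself):
  bytes 0-1  Guard tag        CRC of the data block          e.g. "Failed to compare Guard: Expected=5a5a, Actual=7867"
  bytes 2-3  Application tag  caller-defined                 e.g. "Failed to compare App Tag: Expected=14, Actual=5a5a"
  bytes 4-7  Reference tag    typically derived from the LBA e.g. "Failed to compare Ref Tag: Expected=a, Actual=5a5a5a5a"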
11:50:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.505 11:50:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.505 11:50:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.505 11:50:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:20.505 11:50:06 accel -- accel/accel.sh@41 -- # jq -r . 00:08:20.506 ************************************ 00:08:20.506 START TEST accel_dif_functional_tests 00:08:20.506 ************************************ 00:08:20.506 11:50:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:20.506 [2024-07-25 11:50:06.427126] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:20.506 [2024-07-25 11:50:06.427186] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067668 ] 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: 
Requested device 0000:3f:01.1 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:20.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.506 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:20.506 [2024-07-25 11:50:06.559733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.765 [2024-07-25 11:50:06.646033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.765 [2024-07-25 11:50:06.646125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.765 [2024-07-25 11:50:06.646126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.765 00:08:20.765 00:08:20.765 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.765 http://cunit.sourceforge.net/ 00:08:20.765 00:08:20.765 00:08:20.765 Suite: accel_dif 00:08:20.765 Test: verify: DIF generated, GUARD check ...passed 00:08:20.765 Test: verify: DIF generated, APPTAG check ...passed 00:08:20.765 Test: verify: DIF generated, REFTAG check ...passed 00:08:20.765 Test: verify: DIF not generated, GUARD check ...[2024-07-25 11:50:06.727651] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:20.765 passed 00:08:20.765 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 11:50:06.727719] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:20.765 passed 00:08:20.765 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 11:50:06.727750] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:20.765 passed 00:08:20.765 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:20.765 Test: verify: APPTAG 
incorrect, APPTAG check ...[2024-07-25 11:50:06.727815] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:20.765 passed 00:08:20.765 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:20.765 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:20.765 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:20.765 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 11:50:06.727956] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:20.765 passed 00:08:20.765 Test: verify copy: DIF generated, GUARD check ...passed 00:08:20.765 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:20.765 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:20.765 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 11:50:06.728115] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:20.765 passed 00:08:20.765 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 11:50:06.728153] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:20.765 passed 00:08:20.765 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 11:50:06.728184] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:20.765 passed 00:08:20.765 Test: generate copy: DIF generated, GUARD check ...passed 00:08:20.765 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:20.765 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:20.765 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:20.765 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:20.765 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:20.765 Test: generate copy: iovecs-len validate ...[2024-07-25 11:50:06.728416] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:20.765 passed 00:08:20.765 Test: generate copy: buffer alignment validate ...passed 00:08:20.765 00:08:20.765 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.765 suites 1 1 n/a 0 0 00:08:20.765 tests 26 26 26 0 0 00:08:20.765 asserts 115 115 115 0 n/a 00:08:20.765 00:08:20.765 Elapsed time = 0.002 seconds 00:08:21.024 00:08:21.024 real 0m0.542s 00:08:21.024 user 0m0.663s 00:08:21.024 sys 0m0.219s 00:08:21.024 11:50:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.024 11:50:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:21.024 ************************************ 00:08:21.024 END TEST accel_dif_functional_tests 00:08:21.024 ************************************ 00:08:21.024 00:08:21.024 real 0m51.570s 00:08:21.024 user 0m59.457s 00:08:21.024 sys 0m11.296s 00:08:21.024 11:50:06 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.024 11:50:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.024 ************************************ 00:08:21.024 END TEST accel 00:08:21.024 ************************************ 00:08:21.024 11:50:07 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.024 11:50:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.024 11:50:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.024 11:50:07 -- common/autotest_common.sh@10 -- # set +x 00:08:21.024 ************************************ 00:08:21.024 START TEST accel_rpc 00:08:21.024 ************************************ 00:08:21.024 11:50:07 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.284 * Looking for test storage... 00:08:21.284 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:08:21.284 11:50:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:21.284 11:50:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4067982 00:08:21.284 11:50:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4067982 00:08:21.284 11:50:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 4067982 ']' 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.284 11:50:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.284 [2024-07-25 11:50:07.225952] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
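For reference, the accel_rpc flow that follows (spdk_tgt started with --wait-for-rpc, then driven over its RPC socket) can be reproduced by hand roughly as in the minimal sketch below. The RPC names and the --wait-for-rpc flag are taken from the trace itself; the repo-root working directory and the default /var/tmp/spdk.sock socket are assumptions, not shown in this log.

  # Minimal sketch, assuming the SPDK repo root and the default RPC socket.
  ./build/bin/spdk_tgt --wait-for-rpc &
  # Wait for the RPC socket to come up (the harness does this via waitforlisten).
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # assign the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                     # finish subsystem init after the assignment
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # should print "software", as checked below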
00:08:21.284 [2024-07-25 11:50:07.226016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067982 ] 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:21.284 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.284 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:21.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.285 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:21.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.285 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:21.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.285 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:21.285 [2024-07-25 11:50:07.359062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.543 [2024-07-25 11:50:07.444441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.137 11:50:08 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.137 11:50:08 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:22.137 11:50:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:22.137 11:50:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:22.137 11:50:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:22.137 11:50:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:22.137 11:50:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:22.137 11:50:08 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.137 11:50:08 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.137 11:50:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.137 ************************************ 00:08:22.137 START TEST accel_assign_opcode 00:08:22.137 ************************************ 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.137 [2024-07-25 11:50:08.150635] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.137 [2024-07-25 11:50:08.162661] accel_rpc.c: 167:rpc_accel_assign_opc: 
*NOTICE*: Operation copy will be assigned to module software 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.137 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.396 software 00:08:22.396 00:08:22.396 real 0m0.292s 00:08:22.396 user 0m0.046s 00:08:22.396 sys 0m0.016s 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.396 11:50:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.396 ************************************ 00:08:22.396 END TEST accel_assign_opcode 00:08:22.396 ************************************ 00:08:22.396 11:50:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4067982 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 4067982 ']' 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 4067982 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4067982 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4067982' 00:08:22.396 killing process with pid 4067982 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@969 -- # kill 4067982 00:08:22.396 11:50:08 accel_rpc -- common/autotest_common.sh@974 -- # wait 4067982 00:08:22.964 00:08:22.964 real 0m1.796s 00:08:22.964 user 0m1.829s 00:08:22.964 sys 0m0.553s 00:08:22.964 11:50:08 accel_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.964 11:50:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.964 ************************************ 00:08:22.964 END TEST accel_rpc 00:08:22.964 ************************************ 00:08:22.964 11:50:08 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:08:22.964 11:50:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.964 11:50:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.964 11:50:08 -- common/autotest_common.sh@10 -- # set +x 00:08:22.964 ************************************ 00:08:22.964 START TEST app_cmdline 00:08:22.964 
************************************ 00:08:22.964 11:50:08 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:08:22.964 * Looking for test storage... 00:08:22.964 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:08:22.964 11:50:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:22.964 11:50:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:22.964 11:50:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4068328 00:08:22.964 11:50:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4068328 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 4068328 ']' 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.964 11:50:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 [2024-07-25 11:50:09.088828] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:23.224 [2024-07-25 11:50:09.088892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068328 ] 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 
0000:3d:02.3 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:23.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.224 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:23.224 [2024-07-25 11:50:09.221985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.224 [2024-07-25 11:50:09.308305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.160 11:50:09 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.160 11:50:09 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:24.160 11:50:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:24.160 { 00:08:24.160 "version": "SPDK v24.09-pre git sha1 415e0bb41", 00:08:24.160 "fields": { 00:08:24.160 "major": 24, 00:08:24.160 "minor": 9, 00:08:24.161 "patch": 0, 00:08:24.161 
"suffix": "-pre", 00:08:24.161 "commit": "415e0bb41" 00:08:24.161 } 00:08:24.161 } 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:24.161 11:50:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:08:24.161 11:50:10 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.420 request: 00:08:24.420 { 00:08:24.420 "method": "env_dpdk_get_mem_stats", 00:08:24.420 "req_id": 1 00:08:24.420 } 00:08:24.420 Got JSON-RPC error response 00:08:24.420 response: 00:08:24.420 { 00:08:24.420 "code": -32601, 00:08:24.420 "message": "Method not found" 00:08:24.420 } 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.420 11:50:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4068328 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 
4068328 ']' 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 4068328 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4068328 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4068328' 00:08:24.420 killing process with pid 4068328 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@969 -- # kill 4068328 00:08:24.420 11:50:10 app_cmdline -- common/autotest_common.sh@974 -- # wait 4068328 00:08:24.988 00:08:24.988 real 0m1.932s 00:08:24.988 user 0m2.355s 00:08:24.988 sys 0m0.531s 00:08:24.988 11:50:10 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.988 11:50:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:24.988 ************************************ 00:08:24.988 END TEST app_cmdline 00:08:24.988 ************************************ 00:08:24.988 11:50:10 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:08:24.988 11:50:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.988 11:50:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.988 11:50:10 -- common/autotest_common.sh@10 -- # set +x 00:08:24.988 ************************************ 00:08:24.988 START TEST version 00:08:24.988 ************************************ 00:08:24.988 11:50:10 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:08:24.988 * Looking for test storage... 
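The version test that follows extracts SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX from include/spdk/version.h with grep/cut/tr and compares the result against the in-tree Python package (spdk.__version__). A minimal standalone sketch of that extraction is given here; the grep/cut/tr pipeline is copied from the trace below, the get_ver helper is introduced only for brevity, and the suffix-to-rc0 mapping is inferred from the 24.9rc0 value the run produces rather than copied from version.sh.

  # Minimal sketch of the version.h parsing done by test/app/version.sh
  # (assumes the SPDK repo root as the working directory).
  get_ver() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'; }
  major=$(get_ver MAJOR); minor=$(get_ver MINOR); patch=$(get_ver PATCH); suffix=$(get_ver SUFFIX)
  version="${major}.${minor}"
  [ "$patch" != 0 ] && version="${version}.${patch}"
  [ "$suffix" = "-pre" ] && version="${version}rc0"      # the run below resolves this to 24.9rc0
  python3 -c 'import spdk; print(spdk.__version__)'      # expected to print the same string (24.9rc0)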
00:08:24.988 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:08:24.988 11:50:11 version -- app/version.sh@17 -- # get_header_version major 00:08:24.988 11:50:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # cut -f2 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:24.988 11:50:11 version -- app/version.sh@17 -- # major=24 00:08:24.988 11:50:11 version -- app/version.sh@18 -- # get_header_version minor 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # cut -f2 00:08:24.988 11:50:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:24.988 11:50:11 version -- app/version.sh@18 -- # minor=9 00:08:24.988 11:50:11 version -- app/version.sh@19 -- # get_header_version patch 00:08:24.988 11:50:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # cut -f2 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:24.988 11:50:11 version -- app/version.sh@19 -- # patch=0 00:08:24.988 11:50:11 version -- app/version.sh@20 -- # get_header_version suffix 00:08:24.988 11:50:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # cut -f2 00:08:24.988 11:50:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:24.988 11:50:11 version -- app/version.sh@20 -- # suffix=-pre 00:08:24.988 11:50:11 version -- app/version.sh@22 -- # version=24.9 00:08:24.988 11:50:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:24.988 11:50:11 version -- app/version.sh@28 -- # version=24.9rc0 00:08:24.988 11:50:11 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:08:24.988 11:50:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:25.247 11:50:11 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:25.247 11:50:11 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:25.247 00:08:25.247 real 0m0.194s 00:08:25.247 user 0m0.095s 00:08:25.247 sys 0m0.148s 00:08:25.247 11:50:11 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.247 11:50:11 version -- common/autotest_common.sh@10 -- # set +x 00:08:25.247 ************************************ 00:08:25.247 END TEST version 00:08:25.247 ************************************ 00:08:25.247 11:50:11 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:08:25.247 11:50:11 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:08:25.247 11:50:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.247 11:50:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.247 11:50:11 -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.247 ************************************ 00:08:25.247 START TEST blockdev_general 00:08:25.247 ************************************ 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:08:25.247 * Looking for test storage... 00:08:25.247 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:25.247 11:50:11 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=4068748 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:08:25.247 11:50:11 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 4068748 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 4068748 ']' 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.247 11:50:11 blockdev_general -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.247 11:50:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:25.506 [2024-07-25 11:50:11.415843] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:25.506 [2024-07-25 11:50:11.415906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068748 ] 00:08:25.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.506 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:25.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:25.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.507 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:25.507 [2024-07-25 11:50:11.549182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.766 [2024-07-25 11:50:11.635676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.334 11:50:12 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.334 11:50:12 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:08:26.334 11:50:12 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:26.334 11:50:12 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:08:26.334 11:50:12 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:08:26.334 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.334 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.593 [2024-07-25 11:50:12.534029] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:26.593 [2024-07-25 11:50:12.534083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:26.593 00:08:26.593 [2024-07-25 11:50:12.542019] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:26.593 [2024-07-25 11:50:12.542042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:26.593 00:08:26.593 Malloc0 00:08:26.593 Malloc1 00:08:26.593 Malloc2 00:08:26.593 Malloc3 00:08:26.593 Malloc4 00:08:26.593 Malloc5 00:08:26.593 Malloc6 00:08:26.593 Malloc7 00:08:26.593 Malloc8 00:08:26.593 Malloc9 00:08:26.593 [2024-07-25 11:50:12.675973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:26.593 [2024-07-25 11:50:12.676018] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:08:26.593 [2024-07-25 11:50:12.676035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa96850 00:08:26.593 [2024-07-25 11:50:12.676047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.593 [2024-07-25 11:50:12.677283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.593 [2024-07-25 11:50:12.677311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:26.593 TestPT 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:08:26.852 5000+0 records in 00:08:26.852 5000+0 records out 00:08:26.852 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0152289 s, 672 MB/s 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 AIO0 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:26.852 11:50:12 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.852 11:50:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:27.111 11:50:13 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.371 11:50:13 blockdev_general -- bdev/blockdev.sh@748 -- # 
mapfile -t bdevs_name 00:08:27.371 11:50:13 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:27.372 11:50:13 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "16964ba6-6b6e-4b91-b9d4-885b3a2061a2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "16964ba6-6b6e-4b91-b9d4-885b3a2061a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8062424f-e9c8-567d-90b4-25e1be2484af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8062424f-e9c8-567d-90b4-25e1be2484af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "4943d32c-7a27-5eda-a458-fe32a30ad1cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4943d32c-7a27-5eda-a458-fe32a30ad1cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "08897617-f3e4-54b4-8083-11a9f12c8bd6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "08897617-f3e4-54b4-8083-11a9f12c8bd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "705d23a9-1422-58d5-93fc-0eb31523f6bb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "705d23a9-1422-58d5-93fc-0eb31523f6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "49b497b9-aa62-5cae-bc70-850778b7257b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "49b497b9-aa62-5cae-bc70-850778b7257b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' 
}' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7e480c28-62df-5616-bb0e-6c888af5e47b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e480c28-62df-5616-bb0e-6c888af5e47b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e8f48de5-3f93-519c-ba85-a647b5687cdb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8f48de5-3f93-519c-ba85-a647b5687cdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3405e053-38ae-54ba-803c-a328572b2f22"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3405e053-38ae-54ba-803c-a328572b2f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "88283af2-286e-5396-beca-3dff349fd679"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "88283af2-286e-5396-beca-3dff349fd679",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54355e8d-91c2-59f5-b50a-1e0dbd2e3287"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54355e8d-91c2-59f5-b50a-1e0dbd2e3287",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fe3347b7-9cff-4278-a174-84dab60193fb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4e012245-ae0c-4a65-b1ac-ba4f53df4f11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "bc65f017-9a0f-42e0-8f5d-06fa50afb5ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "306df3c8-1e26-4c24-988b-ba3af8ffedf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b97a1664-def0-47bb-b9e5-39f0b33999e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b145967a-cd56-49bd-afff-5aaba78d4909",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ab73a78b-ed31-4e80-bc77-a82d148f0a83"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d0875e5d-b742-4b99-8194-e3c0311d47da",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "415d5740-7c36-4d10-bc01-e67f61a33dd1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "59a636f4-9d5a-40c7-9f31-62dae7bd93bc"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' 
' "num_blocks": 5000,' ' "uuid": "59a636f4-9d5a-40c7-9f31-62dae7bd93bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:27.372 11:50:13 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:27.372 11:50:13 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:08:27.372 11:50:13 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:27.372 11:50:13 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 4068748 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 4068748 ']' 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 4068748 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4068748 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4068748' 00:08:27.372 killing process with pid 4068748 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@969 -- # kill 4068748 00:08:27.372 11:50:13 blockdev_general -- common/autotest_common.sh@974 -- # wait 4068748 00:08:27.940 11:50:13 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:27.940 11:50:13 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:08:27.940 11:50:13 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:27.941 11:50:13 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.941 11:50:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:27.941 ************************************ 00:08:27.941 START TEST bdev_hello_world 00:08:27.941 ************************************ 00:08:27.941 11:50:13 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:08:27.941 [2024-07-25 11:50:13.846419] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:27.941 [2024-07-25 11:50:13.846473] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069265 ] 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:27.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.941 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.941 [2024-07-25 11:50:13.978509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.201 [2024-07-25 11:50:14.059080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.201 [2024-07-25 11:50:14.213939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.201 [2024-07-25 11:50:14.214001] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:28.201 [2024-07-25 11:50:14.214016] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:28.201 [2024-07-25 11:50:14.221946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:28.201 [2024-07-25 11:50:14.221972] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:28.201 [2024-07-25 11:50:14.229957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:28.201 [2024-07-25 11:50:14.229979] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:28.201 [2024-07-25 11:50:14.301335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.201 [2024-07-25 11:50:14.301385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.201 [2024-07-25 11:50:14.301400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23cf690 00:08:28.201 [2024-07-25 11:50:14.301411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.201 [2024-07-25 11:50:14.302695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.201 [2024-07-25 11:50:14.302725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:28.460 [2024-07-25 11:50:14.446605] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:28.460 [2024-07-25 11:50:14.446677] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:08:28.460 [2024-07-25 11:50:14.446732] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:28.460 [2024-07-25 11:50:14.446806] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:28.460 
[2024-07-25 11:50:14.446885] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:28.460 [2024-07-25 11:50:14.446916] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:28.461 [2024-07-25 11:50:14.446979] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:28.461 00:08:28.461 [2024-07-25 11:50:14.447020] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:28.720 00:08:28.720 real 0m0.931s 00:08:28.720 user 0m0.589s 00:08:28.720 sys 0m0.296s 00:08:28.720 11:50:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.720 11:50:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:28.720 ************************************ 00:08:28.720 END TEST bdev_hello_world 00:08:28.720 ************************************ 00:08:28.720 11:50:14 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:28.720 11:50:14 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.720 11:50:14 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.720 11:50:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:28.720 ************************************ 00:08:28.720 START TEST bdev_bounds 00:08:28.720 ************************************ 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=4069474 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 4069474' 00:08:28.720 Process bdevio pid: 4069474 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 4069474 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 4069474 ']' 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.720 11:50:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:28.979 [2024-07-25 11:50:14.862251] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
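(The bdev_hello_world subtest above runs the standalone hello_bdev example against the Malloc0 bdev defined in the same bdev.json; as the NOTICE lines show, it opens the bdev, gets an I/O channel, writes a buffer and reads it back. A rough local reproduction, assuming the build and config paths used in this run, is simply:

    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0

which should finish with the same 'Read string from bdev : Hello World!' message before the app stops.)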
00:08:28.979 [2024-07-25 11:50:14.862308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069474 ] 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.6 cannot be used 
00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.979 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:28.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.980 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:28.980 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.980 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:28.980 [2024-07-25 11:50:14.994907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.980 [2024-07-25 11:50:15.082602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.980 [2024-07-25 11:50:15.082695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.980 [2024-07-25 11:50:15.082699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.239 [2024-07-25 11:50:15.222999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:29.239 [2024-07-25 11:50:15.223047] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:29.239 [2024-07-25 11:50:15.223060] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:29.239 [2024-07-25 11:50:15.231010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:29.239 [2024-07-25 11:50:15.231036] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:29.239 [2024-07-25 11:50:15.239026] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:29.239 [2024-07-25 11:50:15.239048] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:29.239 [2024-07-25 11:50:15.310495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:29.239 [2024-07-25 11:50:15.310543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.239 [2024-07-25 11:50:15.310559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a11080 00:08:29.239 [2024-07-25 11:50:15.310570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.239 [2024-07-25 11:50:15.311929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.239 [2024-07-25 11:50:15.311957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:29.807 11:50:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.807 11:50:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:29.807 11:50:15 
blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:29.807 I/O targets: 00:08:29.807 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:08:29.807 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:08:29.807 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:08:29.807 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:08:29.807 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:08:29.807 raid0: 131072 blocks of 512 bytes (64 MiB) 00:08:29.807 concat0: 131072 blocks of 512 bytes (64 MiB) 00:08:29.807 raid1: 65536 blocks of 512 bytes (32 MiB) 00:08:29.807 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:08:29.807 00:08:29.807 00:08:29.807 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.807 http://cunit.sourceforge.net/ 00:08:29.807 00:08:29.807 00:08:29.807 Suite: bdevio tests on: AIO0 00:08:29.807 Test: blockdev write read block ...passed 00:08:29.807 Test: blockdev write zeroes read block ...passed 00:08:29.807 Test: blockdev write zeroes read no split ...passed 00:08:29.807 Test: blockdev write zeroes read split ...passed 00:08:29.807 Test: blockdev write zeroes read split partial ...passed 00:08:29.807 Test: blockdev reset ...passed 00:08:29.807 Test: blockdev write read 8 blocks ...passed 00:08:29.807 Test: blockdev write read size > 128k ...passed 00:08:29.807 Test: blockdev write read invalid size ...passed 00:08:29.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:29.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:29.807 Test: blockdev write read max offset ...passed 00:08:29.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:29.807 Test: blockdev writev readv 8 blocks ...passed 00:08:29.807 Test: blockdev writev readv 30 x 1block ...passed 00:08:29.807 Test: blockdev writev readv block ...passed 00:08:29.807 Test: blockdev writev readv size > 128k ...passed 00:08:29.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:29.807 Test: blockdev comparev and writev ...passed 00:08:29.807 Test: blockdev nvme passthru rw ...passed 00:08:29.807 Test: blockdev nvme passthru vendor specific ...passed 00:08:29.807 Test: blockdev nvme admin passthru ...passed 00:08:29.807 Test: blockdev copy ...passed 00:08:29.807 Suite: bdevio tests on: raid1 00:08:29.807 Test: blockdev write read block ...passed 00:08:29.807 Test: blockdev write zeroes read block ...passed 00:08:29.807 Test: blockdev write zeroes read no split ...passed 00:08:29.807 Test: blockdev write zeroes read split ...passed 00:08:29.807 Test: blockdev write zeroes read split partial ...passed 00:08:29.807 Test: blockdev reset ...passed 00:08:29.807 Test: blockdev write read 8 blocks ...passed 00:08:29.807 Test: blockdev write read size > 128k ...passed 00:08:29.808 Test: blockdev write read invalid size ...passed 00:08:29.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:29.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:29.808 Test: blockdev write read max offset ...passed 
00:08:29.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:29.808 Test: blockdev writev readv 8 blocks ...passed 00:08:29.808 Test: blockdev writev readv 30 x 1block ...passed 00:08:29.808 Test: blockdev writev readv block ...passed 00:08:29.808 Test: blockdev writev readv size > 128k ...passed 00:08:29.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:29.808 Test: blockdev comparev and writev ...passed 00:08:29.808 Test: blockdev nvme passthru rw ...passed 00:08:29.808 Test: blockdev nvme passthru vendor specific ...passed 00:08:29.808 Test: blockdev nvme admin passthru ...passed 00:08:29.808 Test: blockdev copy ...passed 00:08:29.808 Suite: bdevio tests on: concat0 00:08:29.808 Test: blockdev write read block ...passed 00:08:29.808 Test: blockdev write zeroes read block ...passed 00:08:29.808 Test: blockdev write zeroes read no split ...passed 00:08:29.808 Test: blockdev write zeroes read split ...passed 00:08:29.808 Test: blockdev write zeroes read split partial ...passed 00:08:29.808 Test: blockdev reset ...passed 00:08:29.808 Test: blockdev write read 8 blocks ...passed 00:08:29.808 Test: blockdev write read size > 128k ...passed 00:08:29.808 Test: blockdev write read invalid size ...passed 00:08:29.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:29.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:29.808 Test: blockdev write read max offset ...passed 00:08:29.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:29.808 Test: blockdev writev readv 8 blocks ...passed 00:08:29.808 Test: blockdev writev readv 30 x 1block ...passed 00:08:29.808 Test: blockdev writev readv block ...passed 00:08:29.808 Test: blockdev writev readv size > 128k ...passed 00:08:29.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:29.808 Test: blockdev comparev and writev ...passed 00:08:29.808 Test: blockdev nvme passthru rw ...passed 00:08:29.808 Test: blockdev nvme passthru vendor specific ...passed 00:08:29.808 Test: blockdev nvme admin passthru ...passed 00:08:29.808 Test: blockdev copy ...passed 00:08:29.808 Suite: bdevio tests on: raid0 00:08:29.808 Test: blockdev write read block ...passed 00:08:29.808 Test: blockdev write zeroes read block ...passed 00:08:29.808 Test: blockdev write zeroes read no split ...passed 00:08:29.808 Test: blockdev write zeroes read split ...passed 00:08:29.808 Test: blockdev write zeroes read split partial ...passed 00:08:29.808 Test: blockdev reset ...passed 00:08:29.808 Test: blockdev write read 8 blocks ...passed 00:08:29.808 Test: blockdev write read size > 128k ...passed 00:08:29.808 Test: blockdev write read invalid size ...passed 00:08:29.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:29.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:29.808 Test: blockdev write read max offset ...passed 00:08:29.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:29.808 Test: blockdev writev readv 8 blocks ...passed 00:08:29.808 Test: blockdev writev readv 30 x 1block ...passed 00:08:29.808 Test: blockdev writev readv block ...passed 00:08:29.808 Test: blockdev writev readv size > 128k ...passed 00:08:29.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:29.808 Test: blockdev comparev and writev ...passed 00:08:29.808 Test: blockdev nvme passthru rw ...passed 00:08:29.808 Test: blockdev nvme 
passthru vendor specific ...passed 00:08:29.808 Test: blockdev nvme admin passthru ...passed 00:08:29.808 Test: blockdev copy ...passed 00:08:29.808 Suite: bdevio tests on: TestPT 00:08:29.808 Test: blockdev write read block ...passed 00:08:29.808 Test: blockdev write zeroes read block ...passed 00:08:29.808 Test: blockdev write zeroes read no split ...passed 00:08:29.808 Test: blockdev write zeroes read split ...passed 00:08:30.068 Test: blockdev write zeroes read split partial ...passed 00:08:30.068 Test: blockdev reset ...passed 00:08:30.068 Test: blockdev write read 8 blocks ...passed 00:08:30.068 Test: blockdev write read size > 128k ...passed 00:08:30.068 Test: blockdev write read invalid size ...passed 00:08:30.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.068 Test: blockdev write read max offset ...passed 00:08:30.068 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.068 Test: blockdev writev readv 8 blocks ...passed 00:08:30.068 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.068 Test: blockdev writev readv block ...passed 00:08:30.068 Test: blockdev writev readv size > 128k ...passed 00:08:30.068 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.068 Test: blockdev comparev and writev ...passed 00:08:30.068 Test: blockdev nvme passthru rw ...passed 00:08:30.068 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.068 Test: blockdev nvme admin passthru ...passed 00:08:30.068 Test: blockdev copy ...passed 00:08:30.068 Suite: bdevio tests on: Malloc2p7 00:08:30.068 Test: blockdev write read block ...passed 00:08:30.068 Test: blockdev write zeroes read block ...passed 00:08:30.068 Test: blockdev write zeroes read no split ...passed 00:08:30.068 Test: blockdev write zeroes read split ...passed 00:08:30.068 Test: blockdev write zeroes read split partial ...passed 00:08:30.068 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p6 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 
Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p5 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p4 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 
00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p3 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p2 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 
00:08:30.069 Suite: bdevio tests on: Malloc2p1 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.069 Test: blockdev writev readv 8 blocks ...passed 00:08:30.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.069 Test: blockdev writev readv block ...passed 00:08:30.069 Test: blockdev writev readv size > 128k ...passed 00:08:30.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.069 Test: blockdev comparev and writev ...passed 00:08:30.069 Test: blockdev nvme passthru rw ...passed 00:08:30.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.069 Test: blockdev nvme admin passthru ...passed 00:08:30.069 Test: blockdev copy ...passed 00:08:30.069 Suite: bdevio tests on: Malloc2p0 00:08:30.069 Test: blockdev write read block ...passed 00:08:30.069 Test: blockdev write zeroes read block ...passed 00:08:30.069 Test: blockdev write zeroes read no split ...passed 00:08:30.069 Test: blockdev write zeroes read split ...passed 00:08:30.069 Test: blockdev write zeroes read split partial ...passed 00:08:30.069 Test: blockdev reset ...passed 00:08:30.069 Test: blockdev write read 8 blocks ...passed 00:08:30.069 Test: blockdev write read size > 128k ...passed 00:08:30.069 Test: blockdev write read invalid size ...passed 00:08:30.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.069 Test: blockdev write read max offset ...passed 00:08:30.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.070 Test: blockdev writev readv 8 blocks ...passed 00:08:30.070 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.070 Test: blockdev writev readv block ...passed 00:08:30.070 Test: blockdev writev readv size > 128k ...passed 00:08:30.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.070 Test: blockdev comparev and writev ...passed 00:08:30.070 Test: blockdev nvme passthru rw ...passed 00:08:30.070 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.070 Test: blockdev nvme admin passthru ...passed 00:08:30.070 Test: blockdev copy ...passed 00:08:30.070 Suite: bdevio tests on: Malloc1p1 00:08:30.070 Test: blockdev write read block ...passed 00:08:30.070 Test: blockdev write zeroes read block ...passed 00:08:30.070 Test: blockdev write zeroes read no split ...passed 00:08:30.070 Test: blockdev write zeroes read split ...passed 00:08:30.070 Test: blockdev write zeroes read split partial ...passed 00:08:30.070 Test: blockdev reset ...passed 00:08:30.070 Test: blockdev write read 8 blocks ...passed 00:08:30.070 Test: blockdev write read size > 128k ...passed 00:08:30.070 Test: 
blockdev write read invalid size ...passed 00:08:30.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.070 Test: blockdev write read max offset ...passed 00:08:30.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.070 Test: blockdev writev readv 8 blocks ...passed 00:08:30.070 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.070 Test: blockdev writev readv block ...passed 00:08:30.070 Test: blockdev writev readv size > 128k ...passed 00:08:30.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.070 Test: blockdev comparev and writev ...passed 00:08:30.070 Test: blockdev nvme passthru rw ...passed 00:08:30.070 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.070 Test: blockdev nvme admin passthru ...passed 00:08:30.070 Test: blockdev copy ...passed 00:08:30.070 Suite: bdevio tests on: Malloc1p0 00:08:30.070 Test: blockdev write read block ...passed 00:08:30.070 Test: blockdev write zeroes read block ...passed 00:08:30.070 Test: blockdev write zeroes read no split ...passed 00:08:30.070 Test: blockdev write zeroes read split ...passed 00:08:30.070 Test: blockdev write zeroes read split partial ...passed 00:08:30.070 Test: blockdev reset ...passed 00:08:30.070 Test: blockdev write read 8 blocks ...passed 00:08:30.070 Test: blockdev write read size > 128k ...passed 00:08:30.070 Test: blockdev write read invalid size ...passed 00:08:30.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.070 Test: blockdev write read max offset ...passed 00:08:30.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.070 Test: blockdev writev readv 8 blocks ...passed 00:08:30.070 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.070 Test: blockdev writev readv block ...passed 00:08:30.070 Test: blockdev writev readv size > 128k ...passed 00:08:30.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.070 Test: blockdev comparev and writev ...passed 00:08:30.070 Test: blockdev nvme passthru rw ...passed 00:08:30.070 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.070 Test: blockdev nvme admin passthru ...passed 00:08:30.070 Test: blockdev copy ...passed 00:08:30.070 Suite: bdevio tests on: Malloc0 00:08:30.070 Test: blockdev write read block ...passed 00:08:30.070 Test: blockdev write zeroes read block ...passed 00:08:30.070 Test: blockdev write zeroes read no split ...passed 00:08:30.070 Test: blockdev write zeroes read split ...passed 00:08:30.070 Test: blockdev write zeroes read split partial ...passed 00:08:30.070 Test: blockdev reset ...passed 00:08:30.070 Test: blockdev write read 8 blocks ...passed 00:08:30.070 Test: blockdev write read size > 128k ...passed 00:08:30.070 Test: blockdev write read invalid size ...passed 00:08:30.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.070 Test: blockdev write read max offset ...passed 00:08:30.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.070 Test: blockdev writev readv 8 blocks ...passed 00:08:30.070 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.070 Test: blockdev writev readv block ...passed 00:08:30.070 
Test: blockdev writev readv size > 128k ...passed 00:08:30.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.070 Test: blockdev comparev and writev ...passed 00:08:30.070 Test: blockdev nvme passthru rw ...passed 00:08:30.070 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.070 Test: blockdev nvme admin passthru ...passed 00:08:30.070 Test: blockdev copy ...passed 00:08:30.070 00:08:30.070 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.070 suites 16 16 n/a 0 0 00:08:30.070 tests 368 368 368 0 0 00:08:30.070 asserts 2224 2224 2224 0 n/a 00:08:30.070 00:08:30.070 Elapsed time = 0.478 seconds 00:08:30.070 0 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 4069474 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 4069474 ']' 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 4069474 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4069474 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4069474' 00:08:30.070 killing process with pid 4069474 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 4069474 00:08:30.070 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 4069474 00:08:30.329 11:50:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:30.329 00:08:30.329 real 0m1.601s 00:08:30.329 user 0m4.001s 00:08:30.329 sys 0m0.455s 00:08:30.329 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.329 11:50:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:30.329 ************************************ 00:08:30.329 END TEST bdev_bounds 00:08:30.329 ************************************ 00:08:30.589 11:50:16 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:08:30.589 11:50:16 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.589 11:50:16 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.589 11:50:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:30.589 ************************************ 00:08:30.589 START TEST bdev_nbd 00:08:30.589 ************************************ 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 
-- # uname -s 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=4069839 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 4069839 /var/tmp/spdk-nbd.sock 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 4069839 ']' 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:30.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
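The bdev_svc invocation above (-r /var/tmp/spdk-nbd.sock -i 0 --json .../test/bdev/bdev.json) launches a minimal SPDK application, and the test then blocks until that RPC socket answers. The following is a stand-alone sketch of the start-and-wait pattern, not the actual waitforlisten helper from common/autotest_common.sh; the polling loop, retry count and sleep interval are illustrative assumptions, while the binary path, socket path and flags are taken from the trace.

#!/usr/bin/env bash
# Sketch only: start bdev_svc on its own RPC socket and poll until it listens.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # path as seen in the log
RPC_SOCK=/var/tmp/spdk-nbd.sock
CONF="$SPDK_DIR/test/bdev/bdev.json"

# Launch the minimal bdev application in the background (flags mirror the trace).
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 --json "$CONF" &
svc_pid=$!

# Poll the UNIX domain socket with a harmless RPC until it responds
# (assumed polling strategy; the real helper in the trace is waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "bdev_svc (pid $svc_pid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done
# Cleanup (e.g. killprocess "$svc_pid", as the test does on exit) is omitted for brevity.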
00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.589 11:50:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:30.589 [2024-07-25 11:50:16.557700] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:30.589 [2024-07-25 11:50:16.557759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.4 cannot be used 
00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.589 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:30.589 [2024-07-25 11:50:16.689284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.849 [2024-07-25 11:50:16.774076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.849 [2024-07-25 11:50:16.923892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:30.849 [2024-07-25 11:50:16.923949] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:30.849 [2024-07-25 11:50:16.923963] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:30.849 [2024-07-25 11:50:16.931900] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:30.849 [2024-07-25 11:50:16.931926] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:30.849 [2024-07-25 11:50:16.939916] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:30.849 [2024-07-25 11:50:16.939942] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:31.108 [2024-07-25 11:50:17.011250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:31.108 [2024-07-25 11:50:17.011297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.108 [2024-07-25 11:50:17.011312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21a7150 00:08:31.108 [2024-07-25 11:50:17.011323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.108 [2024-07-25 11:50:17.012598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.108 [2024-07-25 11:50:17.012627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@864 -- # return 0 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:31.366 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:31.367 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.626 1+0 records in 00:08:31.626 1+0 records out 00:08:31.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231219 s, 17.7 MB/s 00:08:31.626 11:50:17 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:31.626 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.885 1+0 records in 00:08:31.885 1+0 records out 00:08:31.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261379 s, 15.7 MB/s 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:31.885 11:50:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:32.144 1+0 records in 00:08:32.144 1+0 records out 00:08:32.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287843 s, 14.2 MB/s 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:32.144 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:08:32.403 1+0 records in 00:08:32.403 1+0 records out 00:08:32.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314526 s, 13.0 MB/s 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.403 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:32.662 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:32.920 1+0 records in 00:08:32.920 1+0 records out 00:08:32.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384085 s, 10.7 MB/s 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:32.920 11:50:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 
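Every nbd_start_disk call in this stretch is followed by the same waitfornbd verification: wait for the new device to appear in /proc/partitions, read a single 4 KiB block with dd, and confirm via stat that exactly 4096 bytes were copied. Below is a condensed, stand-alone approximation of that check; the retry bound, grep, dd and stat invocations mirror the trace, while the sleep interval and the /tmp scratch path are assumptions (the helper in common/autotest_common.sh writes to spdk/test/bdev/nbdtest).

#!/usr/bin/env bash
# Illustrative approximation of the per-device waitfornbd check repeated above.
set -euo pipefail

waitfornbd() {
    local nbd_name=$1              # e.g. nbd0, nbd1, nbd10 ...
    local scratch=/tmp/nbdtest     # assumption; the trace uses spdk/test/bdev/nbdtest
    local i

    # Up to 20 attempts, as in the trace, for the kernel to publish the partition.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                  # delay is assumed; the trace only shows the retry bound
    done

    # Read one 4 KiB block directly from the NBD device ...
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct

    # ... and verify that exactly 4096 bytes arrived, as stat -c %s does in the log.
    [ "$(stat -c %s "$scratch")" -eq 4096 ]
    rm -f "$scratch"
}

waitfornbd nbd0    # usage: verify the first exported device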
00:08:32.920 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:32.920 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.179 1+0 records in 00:08:33.179 1+0 records out 00:08:33.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379355 s, 10.8 MB/s 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:33.179 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.437 11:50:19 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.437 1+0 records in 00:08:33.437 1+0 records out 00:08:33.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434613 s, 9.4 MB/s 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:33.437 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.695 1+0 records in 00:08:33.695 1+0 records out 00:08:33.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494863 s, 8.3 MB/s 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.695 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.696 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.696 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.696 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:33.696 11:50:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.954 1+0 records in 00:08:33.954 1+0 records out 00:08:33.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504991 s, 8.1 MB/s 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:33.954 11:50:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.212 1+0 records in 00:08:34.212 1+0 records out 00:08:34.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508259 s, 8.1 MB/s 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:34.212 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.476 1+0 records in 00:08:34.476 1+0 records out 00:08:34.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460756 s, 8.9 MB/s 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.476 11:50:20 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:34.476 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.769 1+0 records in 00:08:34.769 1+0 records out 00:08:34.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662074 s, 6.2 MB/s 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:34.769 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w 
nbd12 /proc/partitions 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.027 1+0 records in 00:08:35.027 1+0 records out 00:08:35.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663385 s, 6.2 MB/s 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:35.027 11:50:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:08:35.285 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.286 1+0 records in 00:08:35.286 1+0 records out 00:08:35.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641668 s, 6.4 MB/s 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:35.286 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.544 1+0 records in 00:08:35.544 1+0 records out 00:08:35.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700381 s, 5.8 MB/s 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:35.544 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.802 11:50:21 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.802 1+0 records in 00:08:35.802 1+0 records out 00:08:35.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053221 s, 7.7 MB/s 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.802 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:35.803 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.803 11:50:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.803 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.803 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:35.803 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.061 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd0", 00:08:36.061 "bdev_name": "Malloc0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd1", 00:08:36.061 "bdev_name": "Malloc1p0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd2", 00:08:36.061 "bdev_name": "Malloc1p1" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd3", 00:08:36.061 "bdev_name": "Malloc2p0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd4", 00:08:36.061 "bdev_name": "Malloc2p1" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd5", 00:08:36.061 "bdev_name": "Malloc2p2" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd6", 00:08:36.061 "bdev_name": "Malloc2p3" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd7", 00:08:36.061 "bdev_name": "Malloc2p4" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd8", 00:08:36.061 "bdev_name": "Malloc2p5" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd9", 00:08:36.061 "bdev_name": "Malloc2p6" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd10", 00:08:36.061 "bdev_name": "Malloc2p7" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd11", 00:08:36.061 "bdev_name": "TestPT" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd12", 00:08:36.061 "bdev_name": "raid0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd13", 00:08:36.061 "bdev_name": "concat0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd14", 00:08:36.061 "bdev_name": "raid1" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 
"nbd_device": "/dev/nbd15", 00:08:36.061 "bdev_name": "AIO0" 00:08:36.061 } 00:08:36.061 ]' 00:08:36.061 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:36.061 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:36.061 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd0", 00:08:36.061 "bdev_name": "Malloc0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd1", 00:08:36.061 "bdev_name": "Malloc1p0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd2", 00:08:36.061 "bdev_name": "Malloc1p1" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd3", 00:08:36.061 "bdev_name": "Malloc2p0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd4", 00:08:36.061 "bdev_name": "Malloc2p1" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd5", 00:08:36.061 "bdev_name": "Malloc2p2" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd6", 00:08:36.061 "bdev_name": "Malloc2p3" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd7", 00:08:36.061 "bdev_name": "Malloc2p4" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd8", 00:08:36.061 "bdev_name": "Malloc2p5" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd9", 00:08:36.061 "bdev_name": "Malloc2p6" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd10", 00:08:36.061 "bdev_name": "Malloc2p7" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd11", 00:08:36.061 "bdev_name": "TestPT" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd12", 00:08:36.061 "bdev_name": "raid0" 00:08:36.061 }, 00:08:36.061 { 00:08:36.061 "nbd_device": "/dev/nbd13", 00:08:36.062 "bdev_name": "concat0" 00:08:36.062 }, 00:08:36.062 { 00:08:36.062 "nbd_device": "/dev/nbd14", 00:08:36.062 "bdev_name": "raid1" 00:08:36.062 }, 00:08:36.062 { 00:08:36.062 "nbd_device": "/dev/nbd15", 00:08:36.062 "bdev_name": "AIO0" 00:08:36.062 } 00:08:36.062 ]' 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.062 11:50:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.320 
11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.320 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.577 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.836 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:37.095 
11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.095 11:50:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.095 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.354 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.613 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.872 11:50:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.131 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.389 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.648 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.906 11:50:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.166 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.424 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.683 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.942 11:50:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
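[editor's note] The teardown pass traced above repeats one pattern per device: ask the SPDK app over its RPC socket to stop the nbd export, then poll /proc/partitions until the kernel no longer lists that device. A minimal sketch of that stop-and-wait loop, using the rpc.py path and socket visible in the trace; the pause between polls is an assumption, since the xtrace output does not show it:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd{0..15}; do
    # Ask the SPDK app over its RPC socket to tear down this nbd export.
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # Poll /proc/partitions (up to 20 tries) until the kernel drops the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1   # assumed pause; the interval is not visible in the trace
    done
done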
count=0 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:40.201 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:40.459 /dev/nbd0 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:40.459 
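[editor's note] The data-verify stage that begins here re-exports every bdev before any I/O is attempted: two parallel lists pair a bdev name with a /dev/nbdX node, and each pair is handed to the nbd_start_disk RPC. A sketch of that pairing loop, limited to the names and order shown in the trace (the readiness probe each start is followed by is sketched a little further on):

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4
           Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15
          /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)
for ((i = 0; i < ${#nbd_list[@]}; i++)); do
    # Export bdev i through the kernel nbd driver on the paired /dev/nbdX node.
    "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
done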
11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:40.459 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.459 1+0 records in 00:08:40.460 1+0 records out 00:08:40.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250015 s, 16.4 MB/s 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:40.460 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:08:40.718 /dev/nbd1 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.718 1+0 records in 00:08:40.718 1+0 records out 00:08:40.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302604 s, 13.5 MB/s 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.718 
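[editor's note] After each nbd_start_disk, the trace shows the same readiness probe: wait for the node to appear in /proc/partitions, then read a single 4 KiB block with O_DIRECT into a scratch file, confirm the file is non-empty, and delete it. A sketch of that probe, written here as a standalone wait_for_nbd function for illustration (the function name, scratch path, and poll interval are assumptions, not the script's actual helper):

wait_for_nbd() {    # illustrative stand-in for the traced waitfornbd helper
    local name=$1 scratch=$2 i size
    # Wait (up to 20 polls) for the kernel to publish the device node.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1   # assumed pause between polls
    done
    # Read one 4 KiB block with O_DIRECT to confirm the export answers I/O.
    dd if="/dev/$name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [ "$size" != 0 ]
}
wait_for_nbd nbd0 /tmp/nbdtest   # hypothetical scratch path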
11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:40.718 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:08:40.977 /dev/nbd10 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.977 1+0 records in 00:08:40.977 1+0 records out 00:08:40.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233153 s, 17.6 MB/s 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:40.977 11:50:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:08:41.234 /dev/nbd11 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:41.234 11:50:27 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@869 -- # local i 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.234 1+0 records in 00:08:41.234 1+0 records out 00:08:41.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003387 s, 12.1 MB/s 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:41.234 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:08:41.493 /dev/nbd12 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.493 1+0 records in 00:08:41.493 1+0 records out 00:08:41.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299621 s, 13.7 MB/s 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.493 11:50:27 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@886 -- # size=4096 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:41.493 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:08:41.752 /dev/nbd13 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.752 1+0 records in 00:08:41.752 1+0 records out 00:08:41.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038026 s, 10.8 MB/s 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:41.752 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:08:42.010 /dev/nbd14 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- 
# local i 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.010 1+0 records in 00:08:42.010 1+0 records out 00:08:42.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404187 s, 10.1 MB/s 00:08:42.010 11:50:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.010 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.010 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.010 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.011 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.011 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.011 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:42.011 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:08:42.269 /dev/nbd15 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.269 1+0 records in 00:08:42.269 1+0 records out 00:08:42.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398068 s, 10.3 MB/s 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 
-- # size=4096 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:42.269 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:08:42.527 /dev/nbd2 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.527 1+0 records in 00:08:42.527 1+0 records out 00:08:42.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557847 s, 7.3 MB/s 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:42.527 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:08:42.786 /dev/nbd3 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.786 11:50:28 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.786 1+0 records in 00:08:42.786 1+0 records out 00:08:42.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060637 s, 6.8 MB/s 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:42.786 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:08:43.045 /dev/nbd4 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:43.045 11:50:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.045 1+0 records in 00:08:43.045 1+0 records out 00:08:43.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485479 s, 8.4 MB/s 00:08:43.045 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.045 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:43.045 11:50:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.045 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:43.045 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:43.046 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.046 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:43.046 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:08:43.304 /dev/nbd5 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.304 1+0 records in 00:08:43.304 1+0 records out 00:08:43.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058072 s, 7.1 MB/s 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:43.304 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:08:43.563 /dev/nbd6 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.563 1+0 records in 00:08:43.563 1+0 records out 00:08:43.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510637 s, 8.0 MB/s 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:43.563 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:43.822 /dev/nbd7 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.822 1+0 records in 00:08:43.822 1+0 records out 00:08:43.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666901 s, 6.1 MB/s 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:43.822 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:44.081 /dev/nbd8 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.081 1+0 records in 00:08:44.081 1+0 records out 00:08:44.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761058 s, 5.4 MB/s 00:08:44.081 11:50:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:44.081 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:08:44.339 /dev/nbd9 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.339 1+0 records in 00:08:44.339 1+0 records out 00:08:44.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632794 s, 6.5 MB/s 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.339 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd0", 00:08:44.598 "bdev_name": "Malloc0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd1", 00:08:44.598 "bdev_name": "Malloc1p0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd10", 00:08:44.598 "bdev_name": "Malloc1p1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd11", 00:08:44.598 "bdev_name": "Malloc2p0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd12", 00:08:44.598 "bdev_name": "Malloc2p1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd13", 00:08:44.598 "bdev_name": "Malloc2p2" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd14", 00:08:44.598 "bdev_name": "Malloc2p3" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd15", 00:08:44.598 "bdev_name": "Malloc2p4" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd2", 00:08:44.598 "bdev_name": "Malloc2p5" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd3", 00:08:44.598 "bdev_name": "Malloc2p6" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd4", 00:08:44.598 "bdev_name": "Malloc2p7" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd5", 00:08:44.598 "bdev_name": "TestPT" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd6", 00:08:44.598 "bdev_name": "raid0" 
00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd7", 00:08:44.598 "bdev_name": "concat0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd8", 00:08:44.598 "bdev_name": "raid1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd9", 00:08:44.598 "bdev_name": "AIO0" 00:08:44.598 } 00:08:44.598 ]' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd0", 00:08:44.598 "bdev_name": "Malloc0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd1", 00:08:44.598 "bdev_name": "Malloc1p0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd10", 00:08:44.598 "bdev_name": "Malloc1p1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd11", 00:08:44.598 "bdev_name": "Malloc2p0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd12", 00:08:44.598 "bdev_name": "Malloc2p1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd13", 00:08:44.598 "bdev_name": "Malloc2p2" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd14", 00:08:44.598 "bdev_name": "Malloc2p3" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd15", 00:08:44.598 "bdev_name": "Malloc2p4" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd2", 00:08:44.598 "bdev_name": "Malloc2p5" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd3", 00:08:44.598 "bdev_name": "Malloc2p6" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd4", 00:08:44.598 "bdev_name": "Malloc2p7" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd5", 00:08:44.598 "bdev_name": "TestPT" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd6", 00:08:44.598 "bdev_name": "raid0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd7", 00:08:44.598 "bdev_name": "concat0" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd8", 00:08:44.598 "bdev_name": "raid1" 00:08:44.598 }, 00:08:44.598 { 00:08:44.598 "nbd_device": "/dev/nbd9", 00:08:44.598 "bdev_name": "AIO0" 00:08:44.598 } 00:08:44.598 ]' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:44.598 /dev/nbd1 00:08:44.598 /dev/nbd10 00:08:44.598 /dev/nbd11 00:08:44.598 /dev/nbd12 00:08:44.598 /dev/nbd13 00:08:44.598 /dev/nbd14 00:08:44.598 /dev/nbd15 00:08:44.598 /dev/nbd2 00:08:44.598 /dev/nbd3 00:08:44.598 /dev/nbd4 00:08:44.598 /dev/nbd5 00:08:44.598 /dev/nbd6 00:08:44.598 /dev/nbd7 00:08:44.598 /dev/nbd8 00:08:44.598 /dev/nbd9' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:44.598 /dev/nbd1 00:08:44.598 /dev/nbd10 00:08:44.598 /dev/nbd11 00:08:44.598 /dev/nbd12 00:08:44.598 /dev/nbd13 00:08:44.598 /dev/nbd14 00:08:44.598 /dev/nbd15 00:08:44.598 /dev/nbd2 00:08:44.598 /dev/nbd3 00:08:44.598 /dev/nbd4 00:08:44.598 /dev/nbd5 00:08:44.598 /dev/nbd6 00:08:44.598 /dev/nbd7 00:08:44.598 /dev/nbd8 00:08:44.598 /dev/nbd9' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:44.598 
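[editor's note] Before writing data, the helper re-queries nbd_get_disks and counts how many /dev/nbd entries the returned JSON lists, expecting all 16 exports to still be attached. A sketch of that count check under the same rpc.py path and socket as above:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# Count how many /dev/nbd entries the target still reports as attached.
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
if [ "$count" -ne 16 ]; then
    echo "expected 16 attached nbd devices, found $count" >&2
    exit 1
fi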
11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:44.598 256+0 records in 00:08:44.598 256+0 records out 00:08:44.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104837 s, 100 MB/s 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.598 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:44.856 256+0 records in 00:08:44.856 256+0 records out 00:08:44.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144437 s, 7.3 MB/s 00:08:44.856 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.856 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:44.856 256+0 records in 00:08:44.856 256+0 records out 00:08:44.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170125 s, 6.2 MB/s 00:08:44.856 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.856 11:50:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:45.113 256+0 records in 00:08:45.113 256+0 records out 00:08:45.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170332 s, 6.2 MB/s 00:08:45.113 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.113 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:45.371 256+0 records in 00:08:45.371 256+0 records out 00:08:45.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169827 s, 6.2 MB/s 00:08:45.371 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.371 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:45.371 256+0 records in 00:08:45.371 256+0 records out 00:08:45.371 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156574 s, 6.7 MB/s 00:08:45.371 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.371 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:45.629 256+0 records in 00:08:45.629 256+0 records out 00:08:45.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167312 s, 6.3 MB/s 00:08:45.629 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.629 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:45.629 256+0 records in 00:08:45.629 256+0 records out 00:08:45.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15629 s, 6.7 MB/s 00:08:45.629 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.629 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:45.887 256+0 records in 00:08:45.887 256+0 records out 00:08:45.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146222 s, 7.2 MB/s 00:08:45.887 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.887 11:50:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:46.144 256+0 records in 00:08:46.144 256+0 records out 00:08:46.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168278 s, 6.2 MB/s 00:08:46.144 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.144 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:46.144 256+0 records in 00:08:46.144 256+0 records out 00:08:46.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103523 s, 10.1 MB/s 00:08:46.145 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.145 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:46.403 256+0 records in 00:08:46.403 256+0 records out 00:08:46.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167521 s, 6.3 MB/s 00:08:46.403 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.403 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:46.403 256+0 records in 00:08:46.403 256+0 records out 00:08:46.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167957 s, 6.2 MB/s 00:08:46.662 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.662 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:46.662 256+0 records in 00:08:46.662 256+0 records out 00:08:46.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168861 s, 6.2 
MB/s 00:08:46.662 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.662 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:46.963 256+0 records in 00:08:46.963 256+0 records out 00:08:46.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169597 s, 6.2 MB/s 00:08:46.963 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.963 11:50:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:46.963 256+0 records in 00:08:46.963 256+0 records out 00:08:46.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172063 s, 6.1 MB/s 00:08:46.963 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.963 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:47.240 256+0 records in 00:08:47.240 256+0 records out 00:08:47.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166938 s, 6.3 MB/s 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:47.240 
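This is the nbd_dd_data_verify phase: a 1 MiB random pattern is first written to the nbdrandtest temp file and dd'd onto every NBD device with oflag=direct, and then, in the cmp calls that continue below, the first 1 MiB of each device is compared byte for byte against that file. Reduced to a single device, the round trip looks roughly like this (the device and temp file paths here are placeholders for illustration):

    # Write/verify round trip as performed per device by nbd_dd_data_verify.
    tmp=/tmp/nbdrandtest   # placeholder temp file
    dev=/dev/nbd0          # placeholder NBD device

    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # push it through the NBD device
    cmp -b -n 1M "$tmp" "$dev" && echo "verify OK: $dev"    # read back and compare the first 1 MiB
    rm -f "$tmp"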
11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:47.240 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.241 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:47.241 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.241 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.498 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.499 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.499 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.499 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.499 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.757 11:50:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( 
i = 1 )) 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.016 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.276 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.535 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # 
return 0 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.794 11:50:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.053 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.312 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.571 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd3 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.830 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.089 11:50:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:50.089 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.089 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.089 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.090 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:50.348 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:50.348 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:50.348 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:50.348 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.348 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.349 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:50.349 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.349 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.349 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.349 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.607 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.608 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.867 11:50:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.126 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:51.385 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:51.644 malloc_lvol_verify 00:08:51.644 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:51.903 4c104741-f25e-4166-be63-53d4c2666d2a 00:08:51.903 11:50:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:52.471 42987d08-23f9-43f0-834b-ed47b8e52dde 00:08:52.471 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:52.730 /dev/nbd0 00:08:52.730 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:52.730 mke2fs 1.46.5 (30-Dec-2021) 00:08:52.730 Discarding device blocks: 0/4096 done 00:08:52.730 Creating filesystem with 4096 1k blocks and 1024 inodes 
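The nbd_with_lvol_verify step above stacks a logical volume on a malloc bdev and exports it over NBD before formatting it; the mkfs.ext4 output continues below. The RPC sequence it drives, with the same names and sizes as logged in this run, is roughly:

    # Lvol-over-NBD setup driven through rpc.py (names/sizes as used by nbd_with_lvol_verify).
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # malloc bdev, 512-byte blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore 'lvs' on top of it
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # small lvol 'lvol' inside 'lvs'
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                 # format it to exercise real I/O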
00:08:52.730 00:08:52.730 Allocating group tables: 0/1 done 00:08:52.730 Writing inode tables: 0/1 done 00:08:52.730 Creating journal (1024 blocks): done 00:08:52.730 Writing superblocks and filesystem accounting information: 0/1 done 00:08:52.730 00:08:52.730 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:52.730 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:52.730 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.730 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:52.731 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.731 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:52.731 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.731 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 4069839 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 4069839 ']' 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 4069839 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4069839 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4069839' 00:08:52.990 killing process with pid 4069839 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 4069839 00:08:52.990 11:50:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 4069839 00:08:53.250 11:50:39 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:53.250 00:08:53.250 real 0m22.731s 00:08:53.250 user 0m27.954s 00:08:53.250 sys 0m12.990s 00:08:53.250 11:50:39 
blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.250 11:50:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:53.250 ************************************ 00:08:53.250 END TEST bdev_nbd 00:08:53.250 ************************************ 00:08:53.250 11:50:39 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:53.250 11:50:39 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:08:53.250 11:50:39 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:08:53.250 11:50:39 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:08:53.250 11:50:39 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.250 11:50:39 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.250 11:50:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:53.250 ************************************ 00:08:53.250 START TEST bdev_fio 00:08:53.250 ************************************ 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:53.250 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:53.250 
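bdev_fio builds the fio job file by generating one [job_<bdev>] section per bdev, each pointing filename= at the bdev name; that per-bdev loop follows below. A condensed sketch of the generation step, assuming the echoed sections are appended to the bdev.fio file prepared above:

    # Append one fio job section per bdev (abbreviated bdev list for illustration).
    bdev_fio=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
    bdevs="Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 TestPT raid0 concat0 raid1 AIO0"

    for b in $bdevs; do
        echo "[job_$b]"    >> "$bdev_fio"
        echo "filename=$b" >> "$bdev_fio"
    done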
11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:53.250 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:08:53.510 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.511 11:50:39 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:53.511 ************************************ 00:08:53.511 START TEST bdev_fio_rw_verify 00:08:53.511 ************************************ 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:53.511 11:50:39 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:54.079 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:54.079 fio-3.35 00:08:54.079 Starting 16 threads 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:54.079 
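The QAT messages above and below are the crypto environment reporting that no further QAT devices can be claimed; the run continues past them. The 16 job lines were produced by launching fio with the SPDK bdev external ioengine preloaded and pointed at the generated job file and JSON config. Condensed from the command logged above:

    # fio through the SPDK bdev ioengine (paths and flags as logged in this run).
    plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
    jobfile=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
    jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json

    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$jobfile" --verify_state_save=0 \
        --spdk_json_conf="$jsonconf" --spdk_mem=0 \
        --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output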
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.079 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:54.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:54.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.080 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:06.298 00:09:06.298 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=4074522: Thu Jul 25 11:50:51 2024 00:09:06.298 read: IOPS=102k, BW=398MiB/s (417MB/s)(3980MiB/10001msec) 00:09:06.299 slat (nsec): min=1863, max=3731.9k, avg=31048.70, stdev=15603.64 00:09:06.299 clat (usec): min=10, max=3981, avg=256.03, stdev=135.79 00:09:06.299 lat (usec): min=19, max=4004, avg=287.08, stdev=143.99 00:09:06.299 clat percentiles (usec): 00:09:06.299 | 50.000th=[ 239], 99.000th=[ 619], 99.900th=[ 816], 99.990th=[ 906], 00:09:06.299 | 99.999th=[ 1254] 00:09:06.299 write: IOPS=161k, BW=630MiB/s 
(661MB/s)(6216MiB/9866msec); 0 zone resets 00:09:06.299 slat (usec): min=3, max=474, avg=42.89, stdev=15.22 00:09:06.299 clat (usec): min=11, max=2573, avg=299.94, stdev=148.01 00:09:06.299 lat (usec): min=31, max=2751, avg=342.83, stdev=155.82 00:09:06.299 clat percentiles (usec): 00:09:06.299 | 50.000th=[ 281], 99.000th=[ 701], 99.900th=[ 955], 99.990th=[ 1090], 00:09:06.299 | 99.999th=[ 1713] 00:09:06.299 bw ( KiB/s): min=504022, max=840494, per=99.44%, avg=641581.37, stdev=6763.85, samples=304 00:09:06.299 iops : min=126003, max=210123, avg=160393.21, stdev=1691.02, samples=304 00:09:06.299 lat (usec) : 20=0.01%, 50=1.00%, 100=6.54%, 250=39.31%, 500=44.83% 00:09:06.299 lat (usec) : 750=7.89%, 1000=0.39% 00:09:06.299 lat (msec) : 2=0.03%, 4=0.01% 00:09:06.299 cpu : usr=99.21%, sys=0.41%, ctx=659, majf=0, minf=2763 00:09:06.299 IO depths : 1=12.4%, 2=24.7%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.299 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.299 issued rwts: total=1018765,1591392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.299 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:06.299 00:09:06.299 Run status group 0 (all jobs): 00:09:06.299 READ: bw=398MiB/s (417MB/s), 398MiB/s-398MiB/s (417MB/s-417MB/s), io=3980MiB (4173MB), run=10001-10001msec 00:09:06.299 WRITE: bw=630MiB/s (661MB/s), 630MiB/s-630MiB/s (661MB/s-661MB/s), io=6216MiB (6518MB), run=9866-9866msec 00:09:06.299 00:09:06.299 real 0m12.171s 00:09:06.299 user 2m53.368s 00:09:06.299 sys 0m1.882s 00:09:06.299 11:50:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.299 11:50:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:09:06.299 ************************************ 00:09:06.299 END TEST bdev_fio_rw_verify 00:09:06.299 ************************************ 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:06.299 
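The trace that follows regenerates bdev.fio for the trim pass: the bdev JSON dump is piped through jq so that only bdevs supporting unmap are kept, and one [job_<name>] section is emitted per surviving bdev. A minimal sketch of that selection loop, assuming bdevs.json holds the same stream of bdev objects that the printf below produces (the file name is illustrative, not the harness's exact code):

  # Keep only unmap-capable bdevs and append a fio job section for each.
  # For a stream of JSON objects (as printed below) the filter runs once per object;
  # for a JSON array you would prepend '.[] | ' to the filter.
  for b in $(jq -r 'select(.supported_io_types.unmap == true) | .name' bdevs.json); do
      echo "[job_${b}]"
      echo "filename=${b}"
  done >> bdev.fio

The real run does the same thing inline with printf '%s\n' "${bdevs[@]}" | jq -r ..., as the blockdev.sh@355/@356/@357 echo lines further down in this log show.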
11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:09:06.299 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:06.300 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "16964ba6-6b6e-4b91-b9d4-885b3a2061a2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "16964ba6-6b6e-4b91-b9d4-885b3a2061a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8062424f-e9c8-567d-90b4-25e1be2484af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8062424f-e9c8-567d-90b4-25e1be2484af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "4943d32c-7a27-5eda-a458-fe32a30ad1cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4943d32c-7a27-5eda-a458-fe32a30ad1cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "08897617-f3e4-54b4-8083-11a9f12c8bd6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "08897617-f3e4-54b4-8083-11a9f12c8bd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "705d23a9-1422-58d5-93fc-0eb31523f6bb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "705d23a9-1422-58d5-93fc-0eb31523f6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "49b497b9-aa62-5cae-bc70-850778b7257b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "49b497b9-aa62-5cae-bc70-850778b7257b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7e480c28-62df-5616-bb0e-6c888af5e47b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e480c28-62df-5616-bb0e-6c888af5e47b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e8f48de5-3f93-519c-ba85-a647b5687cdb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8f48de5-3f93-519c-ba85-a647b5687cdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3405e053-38ae-54ba-803c-a328572b2f22"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3405e053-38ae-54ba-803c-a328572b2f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "88283af2-286e-5396-beca-3dff349fd679"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"88283af2-286e-5396-beca-3dff349fd679",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54355e8d-91c2-59f5-b50a-1e0dbd2e3287"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54355e8d-91c2-59f5-b50a-1e0dbd2e3287",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fe3347b7-9cff-4278-a174-84dab60193fb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4e012245-ae0c-4a65-b1ac-ba4f53df4f11",' ' "is_configured": true,' ' "data_offset": 
0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "bc65f017-9a0f-42e0-8f5d-06fa50afb5ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "306df3c8-1e26-4c24-988b-ba3af8ffedf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b97a1664-def0-47bb-b9e5-39f0b33999e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b145967a-cd56-49bd-afff-5aaba78d4909",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ab73a78b-ed31-4e80-bc77-a82d148f0a83"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": 
"d0875e5d-b742-4b99-8194-e3c0311d47da",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "415d5740-7c36-4d10-bc01-e67f61a33dd1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "59a636f4-9d5a-40c7-9f31-62dae7bd93bc"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "59a636f4-9d5a-40c7-9f31-62dae7bd93bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:06.300 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:09:06.300 Malloc1p0 00:09:06.300 Malloc1p1 00:09:06.300 Malloc2p0 00:09:06.300 Malloc2p1 00:09:06.300 Malloc2p2 00:09:06.300 Malloc2p3 00:09:06.300 Malloc2p4 00:09:06.300 Malloc2p5 00:09:06.300 Malloc2p6 00:09:06.300 Malloc2p7 00:09:06.300 TestPT 00:09:06.300 raid0 00:09:06.300 concat0 ]] 00:09:06.300 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "16964ba6-6b6e-4b91-b9d4-885b3a2061a2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "16964ba6-6b6e-4b91-b9d4-885b3a2061a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8062424f-e9c8-567d-90b4-25e1be2484af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8062424f-e9c8-567d-90b4-25e1be2484af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' 
"reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "4943d32c-7a27-5eda-a458-fe32a30ad1cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4943d32c-7a27-5eda-a458-fe32a30ad1cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "08897617-f3e4-54b4-8083-11a9f12c8bd6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "08897617-f3e4-54b4-8083-11a9f12c8bd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "705d23a9-1422-58d5-93fc-0eb31523f6bb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "705d23a9-1422-58d5-93fc-0eb31523f6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "49b497b9-aa62-5cae-bc70-850778b7257b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 8192,' ' "uuid": "49b497b9-aa62-5cae-bc70-850778b7257b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "856f20c5-c6a1-5e85-8b30-0bc5b45fa38c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7e480c28-62df-5616-bb0e-6c888af5e47b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e480c28-62df-5616-bb0e-6c888af5e47b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e8f48de5-3f93-519c-ba85-a647b5687cdb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8f48de5-3f93-519c-ba85-a647b5687cdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3405e053-38ae-54ba-803c-a328572b2f22"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3405e053-38ae-54ba-803c-a328572b2f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "88283af2-286e-5396-beca-3dff349fd679"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "88283af2-286e-5396-beca-3dff349fd679",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54355e8d-91c2-59f5-b50a-1e0dbd2e3287"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54355e8d-91c2-59f5-b50a-1e0dbd2e3287",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fe3347b7-9cff-4278-a174-84dab60193fb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fe3347b7-9cff-4278-a174-84dab60193fb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4e012245-ae0c-4a65-b1ac-ba4f53df4f11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "bc65f017-9a0f-42e0-8f5d-06fa50afb5ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "306df3c8-1e26-4c24-988b-ba3af8ffedf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "306df3c8-1e26-4c24-988b-ba3af8ffedf4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b97a1664-def0-47bb-b9e5-39f0b33999e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b145967a-cd56-49bd-afff-5aaba78d4909",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ab73a78b-ed31-4e80-bc77-a82d148f0a83"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' 
' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ab73a78b-ed31-4e80-bc77-a82d148f0a83",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d0875e5d-b742-4b99-8194-e3c0311d47da",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "415d5740-7c36-4d10-bc01-e67f61a33dd1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "59a636f4-9d5a-40c7-9f31-62dae7bd93bc"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "59a636f4-9d5a-40c7-9f31-62dae7bd93bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- 
# for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.302 11:50:51 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:09:06.302 ************************************ 00:09:06.302 START TEST bdev_fio_trim 00:09:06.302 ************************************ 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:09:06.302 11:50:51 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:06.302 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.302 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.302 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.302 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:09:06.303 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:06.303 fio-3.35 00:09:06.303 Starting 14 threads 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.5 cannot be 
used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:06.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.303 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:18.593 00:09:18.594 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=4076797: Thu Jul 25 11:51:03 2024 00:09:18.594 write: IOPS=133k, BW=521MiB/s (546MB/s)(5207MiB/10001msec); 0 zone resets 00:09:18.594 slat (nsec): min=1697, max=612410, avg=37547.75, stdev=9801.25 00:09:18.594 clat (usec): min=15, max=3744, avg=260.06, stdev=88.04 00:09:18.594 lat (usec): min=20, max=3772, avg=297.60, stdev=91.35 00:09:18.594 clat percentiles (usec): 00:09:18.594 | 50.000th=[ 251], 99.000th=[ 490], 99.900th=[ 586], 99.990th=[ 701], 00:09:18.594 | 99.999th=[ 1418] 00:09:18.594 bw ( KiB/s): min=478965, max=667071, per=100.00%, avg=533491.58, stdev=3484.03, samples=266 00:09:18.594 iops : min=119741, max=166767, avg=133372.84, stdev=871.00, samples=266 00:09:18.594 trim: IOPS=133k, BW=521MiB/s (546MB/s)(5207MiB/10001msec); 0 zone resets 00:09:18.594 slat (usec): min=3, max=206, avg=25.63, stdev= 6.72 00:09:18.594 clat (usec): min=2, max=3772, avg=296.98, stdev=92.02 00:09:18.594 lat (usec): min=7, max=3790, avg=322.61, stdev=94.57 00:09:18.594 clat percentiles (usec): 00:09:18.594 | 50.000th=[ 289], 99.000th=[ 545], 99.900th=[ 644], 99.990th=[ 709], 00:09:18.594 | 99.999th=[ 1188] 00:09:18.594 bw ( KiB/s): min=478965, max=667071, per=100.00%, avg=533491.58, stdev=3484.03, samples=266 00:09:18.594 iops : min=119741, max=166767, avg=133372.84, stdev=871.00, samples=266 00:09:18.594 lat (usec) : 4=0.01%, 10=0.01%, 20=0.02%, 50=0.07%, 100=0.68% 00:09:18.594 lat (usec) : 250=41.08%, 500=56.81%, 750=1.33%, 1000=0.01% 00:09:18.594 lat (msec) : 2=0.01%, 4=0.01% 00:09:18.594 cpu : usr=99.62%, sys=0.00%, ctx=545, majf=0, minf=1012 00:09:18.594 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.594 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.594 issued rwts: total=0,1333101,1333103,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.594 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:18.594 00:09:18.594 Run status group 0 (all jobs): 00:09:18.594 WRITE: bw=521MiB/s 
(546MB/s), 521MiB/s-521MiB/s (546MB/s-546MB/s), io=5207MiB (5460MB), run=10001-10001msec
00:09:18.594 TRIM: bw=521MiB/s (546MB/s), 521MiB/s-521MiB/s (546MB/s-546MB/s), io=5207MiB (5460MB), run=10001-10001msec
00:09:18.594 
00:09:18.594 real 0m11.490s
00:09:18.594 user 2m33.552s
00:09:18.594 sys 0m0.783s
00:09:18.594 11:51:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:18.594 11:51:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:09:18.594 ************************************
00:09:18.594 END TEST bdev_fio_trim
00:09:18.594 ************************************
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:09:18.594 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:09:18.594 
00:09:18.594 real 0m24.052s
00:09:18.594 user 5m27.127s
00:09:18.594 sys 0m2.882s
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:18.594 11:51:03 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:09:18.594 ************************************
00:09:18.594 END TEST bdev_fio
00:09:18.594 ************************************
00:09:18.594 11:51:03 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:18.594 11:51:03 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:18.594 11:51:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:09:18.594 11:51:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:18.594 11:51:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:18.594 ************************************
00:09:18.594 START TEST bdev_verify
00:09:18.594 ************************************
00:09:18.594 11:51:03 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:18.594 [2024-07-25 11:51:03.497996] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
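The bdev_verify stage started above reuses the bdevperf example application against the same bdev.json configuration file referenced throughout these tests. A minimal standalone sketch of that invocation, assuming the workspace layout shown in this log and the commonly documented meanings of the short options (-q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds, -m core mask); -C and the trailing '' argument are carried over verbatim from the logged command rather than interpreted here:

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # 5-second verify pass, queue depth 128, 4 KiB I/Os, cores 0-1 (mask 0x3)
  "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''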
00:09:18.594 [2024-07-25 11:51:03.498050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078813 ] 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:18.594 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:18.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.594 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:18.594 [2024-07-25 11:51:03.628811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.594 [2024-07-25 11:51:03.712863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.594 [2024-07-25 11:51:03.712869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.594 [2024-07-25 11:51:03.856389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:18.594 [2024-07-25 11:51:03.856443] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:18.594 [2024-07-25 11:51:03.856456] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:18.594 [2024-07-25 11:51:03.864400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:18.594 [2024-07-25 11:51:03.864426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:18.594 [2024-07-25 11:51:03.872416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:18.594 [2024-07-25 11:51:03.872439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:18.594 [2024-07-25 11:51:03.943912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:18.594 [2024-07-25 11:51:03.943959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.594 [2024-07-25 11:51:03.943975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25dd450 00:09:18.594 [2024-07-25 11:51:03.943986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.594 [2024-07-25 11:51:03.945275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.594 [2024-07-25 11:51:03.945301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:18.594 Running I/O for 5 seconds... 
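The vbdev_passthru notices just above ("Match on Malloc3" ... "created pt_bdev for: TestPT") come from the bdev stack declared in bdev.json: TestPT is a passthru vbdev layered on the Malloc3 malloc bdev, which is why its creation is deferred until the base bdev arrives. The exact file is not reproduced in this log; a rough sketch of the shape such a configuration can take (the block count and block size are illustrative placeholders, not the values used by the test):

cat > /tmp/bdev-sketch.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc3", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_passthru_create",
          "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
      ]
    }
  ]
}
EOF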
00:09:23.866 00:09:23.866 Latency(us) 00:09:23.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.866 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x1000 00:09:23.866 Malloc0 : 5.17 1162.96 4.54 0.00 0.00 109831.25 550.50 394264.58 00:09:23.866 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x1000 length 0x1000 00:09:23.866 Malloc0 : 5.16 1165.99 4.55 0.00 0.00 109544.25 507.90 395942.30 00:09:23.866 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x800 00:09:23.866 Malloc1p0 : 5.22 612.97 2.39 0.00 0.00 207689.36 3355.44 228170.14 00:09:23.866 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x800 length 0x800 00:09:23.866 Malloc1p0 : 5.21 614.50 2.40 0.00 0.00 207173.64 3381.66 228170.14 00:09:23.866 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x800 00:09:23.866 Malloc1p1 : 5.22 612.56 2.39 0.00 0.00 207219.35 3342.34 223136.97 00:09:23.866 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x800 length 0x800 00:09:23.866 Malloc1p1 : 5.21 614.27 2.40 0.00 0.00 206644.20 3355.44 223136.97 00:09:23.866 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p0 : 5.23 612.17 2.39 0.00 0.00 206734.63 3381.66 218103.81 00:09:23.866 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p0 : 5.21 614.03 2.40 0.00 0.00 206100.55 3381.66 218103.81 00:09:23.866 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p1 : 5.23 611.78 2.39 0.00 0.00 206259.00 3381.66 213070.64 00:09:23.866 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p1 : 5.21 613.79 2.40 0.00 0.00 205569.97 3434.09 214748.36 00:09:23.866 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p2 : 5.23 611.40 2.39 0.00 0.00 205772.55 3342.34 207198.62 00:09:23.866 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p2 : 5.22 613.54 2.40 0.00 0.00 205038.20 3355.44 208876.34 00:09:23.866 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p3 : 5.24 611.18 2.39 0.00 0.00 205219.69 3355.44 201326.59 00:09:23.866 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p3 : 5.22 613.29 2.40 0.00 0.00 204493.65 3407.87 202165.45 00:09:23.866 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p4 : 5.24 610.95 2.39 0.00 0.00 204700.93 
3303.01 196293.43 00:09:23.866 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p4 : 5.22 613.02 2.39 0.00 0.00 203974.74 3355.44 197132.29 00:09:23.866 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p5 : 5.24 610.73 2.39 0.00 0.00 204192.66 3316.12 192099.12 00:09:23.866 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p5 : 5.22 612.61 2.39 0.00 0.00 203520.12 3342.34 192937.98 00:09:23.866 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x0 length 0x200 00:09:23.866 Malloc2p6 : 5.24 610.51 2.38 0.00 0.00 203664.13 3407.87 186227.10 00:09:23.866 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.866 Verification LBA range: start 0x200 length 0x200 00:09:23.866 Malloc2p6 : 5.23 612.22 2.39 0.00 0.00 203056.08 3342.34 187065.96 00:09:23.867 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x0 length 0x200 00:09:23.867 Malloc2p7 : 5.24 610.28 2.38 0.00 0.00 203124.43 3381.66 182032.79 00:09:23.867 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x200 length 0x200 00:09:23.867 Malloc2p7 : 5.23 611.83 2.39 0.00 0.00 202579.10 3381.66 182032.79 00:09:23.867 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x0 length 0x1000 00:09:23.867 TestPT : 5.27 607.70 2.37 0.00 0.00 203225.27 18979.23 183710.52 00:09:23.867 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x1000 length 0x1000 00:09:23.867 TestPT : 5.26 608.36 2.38 0.00 0.00 202996.32 20552.09 182032.79 00:09:23.867 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x0 length 0x2000 00:09:23.867 raid0 : 5.25 609.78 2.38 0.00 0.00 201710.39 3185.05 156028.11 00:09:23.867 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x2000 length 0x2000 00:09:23.867 raid0 : 5.27 631.24 2.47 0.00 0.00 194953.04 3185.05 159383.55 00:09:23.867 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x0 length 0x2000 00:09:23.867 concat0 : 5.25 609.56 2.38 0.00 0.00 201213.06 3237.48 151833.80 00:09:23.867 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x2000 length 0x2000 00:09:23.867 concat0 : 5.27 631.02 2.46 0.00 0.00 194474.78 3250.59 154350.39 00:09:23.867 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x0 length 0x1000 00:09:23.867 raid1 : 5.27 631.14 2.47 0.00 0.00 193818.71 2516.58 153511.53 00:09:23.867 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: start 0x1000 length 0x1000 00:09:23.867 raid1 : 5.28 630.66 2.46 0.00 0.00 194027.46 3932.16 152672.67 00:09:23.867 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.867 Verification LBA range: 
start 0x0 length 0x4e2
00:09:23.867 AIO0 : 5.27 630.96 2.46 0.00 0.00 193331.17 1638.40 161900.13
00:09:23.867 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:23.867 Verification LBA range: start 0x4e2 length 0x4e2
00:09:23.867 AIO0 : 5.28 630.50 2.46 0.00 0.00 193530.45 2293.76 161061.27
00:09:23.867 ===================================================================================================================
00:09:23.867 Total : 20797.51 81.24 0.00 0.00 192205.85 507.90 395942.30
00:09:23.867 
00:09:23.867 real 0m6.428s
00:09:23.867 user 0m11.948s
00:09:23.867 sys 0m0.375s
00:09:23.867 11:51:09 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:23.867 11:51:09 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:23.867 ************************************
00:09:23.867 END TEST bdev_verify
00:09:23.867 ************************************
00:09:23.867 11:51:09 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:23.867 11:51:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:09:23.867 11:51:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:23.867 11:51:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:23.867 ************************************
00:09:23.867 START TEST bdev_verify_big_io
00:09:23.867 ************************************
00:09:23.867 11:51:09 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:24.126 [2024-07-25 11:51:10.016430] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
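The Total line in the bdev_verify summary above reports 20797.51 IOPS and 81.24 MiB/s, and the two figures are mutually consistent: at a 4096-byte I/O size, MiB/s is simply IOPS/256. A one-line sanity check (awk is only used for the floating-point division; both numbers are copied from the log):

  awk 'BEGIN { iops = 20797.51; io = 4096; printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'
  # prints 81.24 MiB/s, matching the bdevperf summary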
00:09:24.126 [2024-07-25 11:51:10.016488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080316 ] 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:24.126 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.126 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:24.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:24.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.127 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:24.127 [2024-07-25 11:51:10.149876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.127 [2024-07-25 11:51:10.235307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.127 [2024-07-25 11:51:10.235313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.386 [2024-07-25 11:51:10.376175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.386 [2024-07-25 11:51:10.376227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:24.386 [2024-07-25 11:51:10.376240] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:24.386 [2024-07-25 11:51:10.384183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:24.386 [2024-07-25 11:51:10.384207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:24.386 [2024-07-25 11:51:10.392193] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:24.386 [2024-07-25 11:51:10.392215] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:24.386 [2024-07-25 11:51:10.463597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.386 [2024-07-25 11:51:10.463642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.386 [2024-07-25 11:51:10.463657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d0450 00:09:24.386 [2024-07-25 11:51:10.463669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.386 [2024-07-25 11:51:10.464948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.386 [2024-07-25 11:51:10.464975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:24.645 [2024-07-25 11:51:10.625294] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). 
Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.626466] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.628186] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.629341] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.631117] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.632348] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.634125] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.635602] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.636502] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.637862] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.638764] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.640126] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.641027] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.642410] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.643312] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.644662] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:09:24.645 [2024-07-25 11:51:10.665259] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:09:24.645 [2024-07-25 11:51:10.666977] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:09:24.645 Running I/O for 5 seconds... 00:09:32.767 00:09:32.767 Latency(us) 00:09:32.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.767 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x100 00:09:32.767 Malloc0 : 5.59 183.18 11.45 0.00 0.00 685866.24 799.54 2040109.47 00:09:32.767 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x100 length 0x100 00:09:32.767 Malloc0 : 6.78 226.40 14.15 0.00 0.00 419634.63 786.43 597268.89 00:09:32.767 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x80 00:09:32.767 Malloc1p0 : 6.41 39.96 2.50 0.00 0.00 2931638.54 1343.49 4912368.84 00:09:32.767 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x80 length 0x80 00:09:32.767 Malloc1p0 : 6.50 84.37 5.27 0.00 0.00 1404784.43 2202.01 2805150.52 00:09:32.767 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x80 00:09:32.767 Malloc1p1 : 6.50 41.86 2.62 0.00 0.00 2728463.02 1323.83 4778151.12 00:09:32.767 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x80 length 0x80 00:09:32.767 Malloc1p1 : 6.79 35.32 2.21 0.00 0.00 3344969.66 1376.26 5529770.39 00:09:32.767 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p0 : 6.09 28.90 1.81 0.00 0.00 987214.84 652.08 1711276.03 00:09:32.767 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p0 : 6.27 22.95 1.43 0.00 0.00 1261408.29 586.55 2093796.56 00:09:32.767 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA 
range: start 0x0 length 0x20 00:09:32.767 Malloc2p1 : 6.09 28.89 1.81 0.00 0.00 979487.03 760.22 1691143.37 00:09:32.767 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p1 : 6.27 22.95 1.43 0.00 0.00 1250049.76 593.10 2066953.01 00:09:32.767 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p2 : 6.09 28.89 1.81 0.00 0.00 970962.75 593.10 1664299.83 00:09:32.767 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p2 : 6.28 22.95 1.43 0.00 0.00 1239868.18 589.82 2040109.47 00:09:32.767 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p3 : 6.09 28.88 1.81 0.00 0.00 962417.30 579.99 1644167.17 00:09:32.767 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p3 : 6.50 24.63 1.54 0.00 0.00 1164019.72 688.13 2026687.69 00:09:32.767 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p4 : 6.19 31.02 1.94 0.00 0.00 899145.58 579.99 1617323.62 00:09:32.767 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p4 : 6.50 24.62 1.54 0.00 0.00 1154838.44 756.94 1999844.15 00:09:32.767 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p5 : 6.19 31.01 1.94 0.00 0.00 891660.28 566.89 1597190.96 00:09:32.767 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p5 : 6.50 24.61 1.54 0.00 0.00 1146260.41 612.76 1973000.60 00:09:32.767 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p6 : 6.19 31.00 1.94 0.00 0.00 884165.03 570.16 1570347.42 00:09:32.767 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p6 : 6.50 24.60 1.54 0.00 0.00 1136193.65 586.55 1946157.06 00:09:32.767 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x20 00:09:32.767 Malloc2p7 : 6.19 31.00 1.94 0.00 0.00 876815.49 553.78 1550214.76 00:09:32.767 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x20 length 0x20 00:09:32.767 Malloc2p7 : 6.50 24.60 1.54 0.00 0.00 1127474.62 645.53 1932735.28 00:09:32.767 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x100 00:09:32.767 TestPT : 6.62 43.53 2.72 0.00 0.00 2377983.60 1343.49 4429185.02 00:09:32.767 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x100 length 0x100 00:09:32.767 TestPT : 6.82 35.48 2.22 0.00 0.00 3076625.98 187904.82 4187593.11 00:09:32.767 Job: raid0 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x200 00:09:32.767 raid0 : 6.41 47.43 2.96 0.00 0.00 2161563.72 1461.45 4268123.75 00:09:32.767 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x200 length 0x200 00:09:32.767 raid0 : 6.82 37.54 2.35 0.00 0.00 2812491.33 1448.35 4939212.39 00:09:32.767 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x200 00:09:32.767 concat0 : 6.62 50.78 3.17 0.00 0.00 1922789.91 1441.79 4107062.48 00:09:32.767 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x200 length 0x200 00:09:32.767 concat0 : 6.78 41.17 2.57 0.00 0.00 2492382.79 1454.90 4751307.57 00:09:32.767 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x0 length 0x100 00:09:32.767 raid1 : 6.67 67.19 4.20 0.00 0.00 1456885.09 1821.90 3946001.20 00:09:32.767 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:32.767 Verification LBA range: start 0x100 length 0x100 00:09:32.768 raid1 : 6.79 42.42 2.65 0.00 0.00 2366099.27 1874.33 4590246.30 00:09:32.768 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:09:32.768 Verification LBA range: start 0x0 length 0x4e 00:09:32.768 AIO0 : 6.72 70.52 4.41 0.00 0.00 828235.00 734.00 2335388.47 00:09:32.768 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:09:32.768 Verification LBA range: start 0x4e length 0x4e 00:09:32.768 AIO0 : 6.79 39.76 2.49 0.00 0.00 1500102.82 737.28 2939368.24 00:09:32.768 =================================================================================================================== 00:09:32.768 Total : 1518.41 94.90 0.00 0.00 1384226.98 553.78 5529770.39 00:09:32.768 00:09:32.768 real 0m7.961s 00:09:32.768 user 0m15.015s 00:09:32.768 sys 0m0.383s 00:09:32.768 11:51:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.768 11:51:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 ************************************ 00:09:32.768 END TEST bdev_verify_big_io 00:09:32.768 ************************************ 00:09:32.768 11:51:17 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.768 11:51:17 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:32.768 11:51:17 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.768 11:51:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 ************************************ 00:09:32.768 START TEST bdev_write_zeroes 00:09:32.768 ************************************ 00:09:32.768 11:51:18 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.768 [2024-07-25 11:51:18.065471] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
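The queue-depth warnings emitted while the big-I/O (-o 65536) jobs were being constructed, which limited the Malloc2p* jobs to 32 outstanding I/Os and AIO0 to 78, follow from the bdev sizes visible in the earlier 4 KiB verify table: each Malloc2p* range is 0x200 units long and AIO0 is 0x4e2 units, assuming the printed LBA range lengths are expressed in I/O-sized units (here 4 KiB), i.e. roughly 2 MiB and 5 MB respectively. Dividing by the 64 KiB I/O size reproduces exactly the limits bdevperf applied:

  # 2 MiB Malloc2p* partition / 64 KiB I/Os -> at most 32 in flight
  echo $(( (0x200 * 4096) / 65536 ))   # 32
  # AIO0: 0x4e2 x 4 KiB / 64 KiB I/Os -> at most 78 in flight
  echo $(( (0x4e2 * 4096) / 65536 ))   # 78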
00:09:32.768 [2024-07-25 11:51:18.065531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081653 ] 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:32.768 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:32.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.768 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:32.768 [2024-07-25 11:51:18.196711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.768 [2024-07-25 11:51:18.279227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.768 [2024-07-25 11:51:18.433971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:32.768 [2024-07-25 11:51:18.434017] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:32.768 [2024-07-25 11:51:18.434030] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:32.768 [2024-07-25 11:51:18.441981] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:32.768 [2024-07-25 11:51:18.442006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:32.768 [2024-07-25 11:51:18.449992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:32.768 [2024-07-25 11:51:18.450014] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:32.768 [2024-07-25 11:51:18.521221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:32.768 [2024-07-25 11:51:18.521269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.768 [2024-07-25 11:51:18.521284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21823d0 00:09:32.768 [2024-07-25 11:51:18.521295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.768 [2024-07-25 11:51:18.522725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.768 [2024-07-25 11:51:18.522752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:32.768 Running I/O for 1 seconds... 
00:09:33.707 00:09:33.707 Latency(us) 00:09:33.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.707 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc0 : 1.04 5397.07 21.08 0.00 0.00 23706.78 612.76 39636.17 00:09:33.707 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc1p0 : 1.04 5389.75 21.05 0.00 0.00 23700.52 835.58 38797.31 00:09:33.707 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc1p1 : 1.05 5382.71 21.03 0.00 0.00 23678.70 838.86 37958.45 00:09:33.707 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p0 : 1.05 5375.62 21.00 0.00 0.00 23661.86 829.03 37119.59 00:09:33.707 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p1 : 1.05 5368.62 20.97 0.00 0.00 23642.63 822.48 36280.73 00:09:33.707 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p2 : 1.05 5361.64 20.94 0.00 0.00 23625.10 825.75 35441.87 00:09:33.707 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p3 : 1.05 5354.60 20.92 0.00 0.00 23609.50 825.75 34603.01 00:09:33.707 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p4 : 1.05 5347.66 20.89 0.00 0.00 23591.88 825.75 33764.15 00:09:33.707 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p5 : 1.05 5340.73 20.86 0.00 0.00 23574.82 822.48 32925.29 00:09:33.707 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p6 : 1.06 5333.71 20.83 0.00 0.00 23558.68 822.48 32086.43 00:09:33.707 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 Malloc2p7 : 1.06 5326.84 20.81 0.00 0.00 23540.59 825.75 31247.56 00:09:33.707 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 TestPT : 1.06 5319.97 20.78 0.00 0.00 23525.62 858.52 30408.70 00:09:33.707 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 raid0 : 1.06 5312.01 20.75 0.00 0.00 23501.72 1500.77 28940.70 00:09:33.707 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 concat0 : 1.06 5304.22 20.72 0.00 0.00 23456.46 1487.67 27472.69 00:09:33.707 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 raid1 : 1.06 5294.52 20.68 0.00 0.00 23400.04 2372.40 25060.97 00:09:33.707 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.707 AIO0 : 1.06 5288.61 20.66 0.00 0.00 23322.43 950.27 24326.96 00:09:33.707 =================================================================================================================== 00:09:33.707 Total : 85498.27 333.98 0.00 0.00 23568.58 612.76 39636.17 00:09:34.277 00:09:34.277 real 0m2.129s 00:09:34.277 user 0m1.724s 00:09:34.277 sys 0m0.334s 00:09:34.277 11:51:20 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.277 11:51:20 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:34.277 ************************************ 00:09:34.277 END TEST bdev_write_zeroes 00:09:34.277 ************************************ 00:09:34.277 11:51:20 blockdev_general 
-- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:34.277 11:51:20 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:34.277 11:51:20 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.277 11:51:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:34.277 ************************************ 00:09:34.277 START TEST bdev_json_nonenclosed 00:09:34.277 ************************************ 00:09:34.277 11:51:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:34.277 [2024-07-25 11:51:20.278986] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:09:34.277 [2024-07-25 11:51:20.279041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082011 ] 00:09:34.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.277 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:34.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.277 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:34.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:09:34.278 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:34.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.278 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:34.538 [2024-07-25 11:51:20.398652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.538 [2024-07-25 11:51:20.482362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.538 [2024-07-25 11:51:20.482426] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
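The bdev_json_nonenclosed test is meant to hit exactly the error shown above: it feeds bdevperf a nonenclosed.json whose contents are not reproduced in this log, and the "not enclosed in {}" message means the top level of that file is not a single JSON object. Illustratively, the difference between an input that trips this check and one that passes it is only the outer pair of braces (the file names and contents below are hypothetical):

cat > /tmp/nonenclosed-sketch.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF
cat > /tmp/enclosed-sketch.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF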
00:09:34.538 [2024-07-25 11:51:20.482443] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:09:34.538 [2024-07-25 11:51:20.482454] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:34.538 
00:09:34.538 real 0m0.348s
00:09:34.538 user 0m0.197s
00:09:34.538 sys 0m0.148s
00:09:34.538 11:51:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:34.538 11:51:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:09:34.538 ************************************
00:09:34.538 END TEST bdev_json_nonenclosed
00:09:34.538 ************************************
00:09:34.538 11:51:20 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:34.538 11:51:20 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:09:34.538 11:51:20 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:34.538 11:51:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:34.799 ************************************
00:09:34.799 START TEST bdev_json_nonarray
00:09:34.799 ************************************
00:09:34.799 11:51:20 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:34.799 [2024-07-25 11:51:20.715533] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:09:34.799 [2024-07-25 11:51:20.715591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082217 ] 00:09:34.799 [the same qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used pair repeats here for 0000:3d:01.0 through 0000:3f:01.6] 00:09:34.799
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:34.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.799 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:34.799 [2024-07-25 11:51:20.846370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.059 [2024-07-25 11:51:20.928831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.059 [2024-07-25 11:51:20.928896] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:35.059 [2024-07-25 11:51:20.928912] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:35.059 [2024-07-25 11:51:20.928923] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.059 00:09:35.059 real 0m0.356s 00:09:35.059 user 0m0.212s 00:09:35.059 sys 0m0.142s 00:09:35.059 11:51:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.059 11:51:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:35.059 ************************************ 00:09:35.059 END TEST bdev_json_nonarray 00:09:35.059 ************************************ 00:09:35.059 11:51:21 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:09:35.059 11:51:21 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:09:35.059 11:51:21 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.059 11:51:21 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.059 11:51:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:35.059 ************************************ 00:09:35.059 START TEST bdev_qos 00:09:35.059 ************************************ 00:09:35.059 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:09:35.059 11:51:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=4082242 00:09:35.059 11:51:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 4082242' 00:09:35.059 Process qos testing pid: 4082242 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; 
killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 4082242 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 4082242 ']' 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.060 11:51:21 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 [2024-07-25 11:51:21.158773] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:09:35.060 [2024-07-25 11:51:21.158828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082242 ] 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:35.320 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:35.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.320 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:35.320 [2024-07-25 11:51:21.278526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.320 [2024-07-25 11:51:21.363400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 Malloc_0 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- 
common/autotest_common.sh@901 -- # local i 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 [ 00:09:36.258 { 00:09:36.258 "name": "Malloc_0", 00:09:36.258 "aliases": [ 00:09:36.258 "43d4bf1c-f260-42ff-96ef-7029cc1ff78d" 00:09:36.258 ], 00:09:36.258 "product_name": "Malloc disk", 00:09:36.258 "block_size": 512, 00:09:36.258 "num_blocks": 262144, 00:09:36.258 "uuid": "43d4bf1c-f260-42ff-96ef-7029cc1ff78d", 00:09:36.258 "assigned_rate_limits": { 00:09:36.258 "rw_ios_per_sec": 0, 00:09:36.258 "rw_mbytes_per_sec": 0, 00:09:36.258 "r_mbytes_per_sec": 0, 00:09:36.258 "w_mbytes_per_sec": 0 00:09:36.258 }, 00:09:36.258 "claimed": false, 00:09:36.258 "zoned": false, 00:09:36.258 "supported_io_types": { 00:09:36.258 "read": true, 00:09:36.258 "write": true, 00:09:36.258 "unmap": true, 00:09:36.258 "flush": true, 00:09:36.258 "reset": true, 00:09:36.258 "nvme_admin": false, 00:09:36.258 "nvme_io": false, 00:09:36.258 "nvme_io_md": false, 00:09:36.258 "write_zeroes": true, 00:09:36.258 "zcopy": true, 00:09:36.258 "get_zone_info": false, 00:09:36.258 "zone_management": false, 00:09:36.258 "zone_append": false, 00:09:36.258 "compare": false, 00:09:36.258 "compare_and_write": false, 00:09:36.258 "abort": true, 00:09:36.258 "seek_hole": false, 00:09:36.258 "seek_data": false, 00:09:36.258 "copy": true, 00:09:36.258 "nvme_iov_md": false 00:09:36.258 }, 00:09:36.258 "memory_domains": [ 00:09:36.258 { 00:09:36.258 "dma_device_id": "system", 00:09:36.258 "dma_device_type": 1 00:09:36.258 }, 00:09:36.258 { 00:09:36.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.258 "dma_device_type": 2 00:09:36.258 } 00:09:36.258 ], 00:09:36.258 "driver_specific": {} 00:09:36.258 } 00:09:36.258 ] 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 Null_1 00:09:36.258 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.259 11:51:22 
blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.259 [ 00:09:36.259 { 00:09:36.259 "name": "Null_1", 00:09:36.259 "aliases": [ 00:09:36.259 "b86ab2f1-4f7a-4fb9-b4b6-e97c10b76944" 00:09:36.259 ], 00:09:36.259 "product_name": "Null disk", 00:09:36.259 "block_size": 512, 00:09:36.259 "num_blocks": 262144, 00:09:36.259 "uuid": "b86ab2f1-4f7a-4fb9-b4b6-e97c10b76944", 00:09:36.259 "assigned_rate_limits": { 00:09:36.259 "rw_ios_per_sec": 0, 00:09:36.259 "rw_mbytes_per_sec": 0, 00:09:36.259 "r_mbytes_per_sec": 0, 00:09:36.259 "w_mbytes_per_sec": 0 00:09:36.259 }, 00:09:36.259 "claimed": false, 00:09:36.259 "zoned": false, 00:09:36.259 "supported_io_types": { 00:09:36.259 "read": true, 00:09:36.259 "write": true, 00:09:36.259 "unmap": false, 00:09:36.259 "flush": false, 00:09:36.259 "reset": true, 00:09:36.259 "nvme_admin": false, 00:09:36.259 "nvme_io": false, 00:09:36.259 "nvme_io_md": false, 00:09:36.259 "write_zeroes": true, 00:09:36.259 "zcopy": false, 00:09:36.259 "get_zone_info": false, 00:09:36.259 "zone_management": false, 00:09:36.259 "zone_append": false, 00:09:36.259 "compare": false, 00:09:36.259 "compare_and_write": false, 00:09:36.259 "abort": true, 00:09:36.259 "seek_hole": false, 00:09:36.259 "seek_data": false, 00:09:36.259 "copy": false, 00:09:36.259 "nvme_iov_md": false 00:09:36.259 }, 00:09:36.259 "driver_specific": {} 00:09:36.259 } 00:09:36.259 ] 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:09:36.259 11:51:22 
blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:36.259 11:51:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:09:36.259 Running I/O for 60 seconds... 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 68113.84 272455.37 0.00 0.00 275456.00 0.00 0.00 ' 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=68113.84 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 68113 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=68113 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=17000 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 17000 -gt 1000 ']' 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.535 11:51:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:41.535 ************************************ 00:09:41.535 START TEST bdev_qos_iops 00:09:41.535 ************************************ 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 17000 IOPS Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=17000 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:41.535 11:51:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 
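What the bdev_qos_iops subtest does with this pipeline: measure the unthrottled IOPS of Malloc_0 via iostat.py (about 68k above), derive a much lower cap (17000 here), apply it with bdev_set_qos_limit, then re-measure and require the throttled rate to land within roughly +/-10% of the cap (the 15300/18700 bounds printed below). A condensed sketch of that loop with the same tools; the "quarter of the measured rate" choice is an illustration rather than the harness's exact arithmetic, and it assumes bdevperf is already generating I/O via perform_tests as in the run above:

#!/bin/bash
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Unthrottled IOPS of Malloc_0: column 2 of the last iostat.py sample.
io_result=$("$SPDK/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}')

# Pick a cap well below the measured rate (illustrative: ~25%, rounded down to 1000s).
iops_limit=$(((io_result / 4000) * 1000))
"$RPC" bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0

# Re-measure and check the throttled rate stays within +/-10% of the cap.
qos_result=$("$SPDK/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}')
lower=$((iops_limit * 9 / 10))
upper=$((iops_limit * 11 / 10))
if ((qos_result < lower || qos_result > upper)); then
    echo "FAIL: $qos_result IOPS outside [$lower, $upper]" >&2
    exit 1
fi
echo "PASS: $qos_result IOPS within [$lower, $upper]"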
00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 16995.95 67983.81 0.00 0.00 69360.00 0.00 0.00 ' 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=16995.95 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 16995 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=16995 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=15300 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=18700 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16995 -lt 15300 ']' 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16995 -gt 18700 ']' 00:09:46.868 00:09:46.868 real 0m5.244s 00:09:46.868 user 0m0.107s 00:09:46.868 sys 0m0.047s 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.868 11:51:32 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:09:46.868 ************************************ 00:09:46.868 END TEST bdev_qos_iops 00:09:46.868 ************************************ 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:09:46.868 11:51:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:09:52.140 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21337.75 85351.00 0.00 0.00 87040.00 0.00 0.00 ' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=87040.00 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 87040 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=87040 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.141 11:51:37 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:52.141 ************************************ 00:09:52.141 START TEST bdev_qos_bw 00:09:52.141 ************************************ 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:09:52.141 11:51:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2046.62 8186.46 0.00 0.00 8304.00 0.00 0.00 ' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8304.00 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8304 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8304 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8304 -lt 7372 ']' 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8304 -gt 9011 ']' 00:09:57.416 00:09:57.416 real 0m5.237s 00:09:57.416 user 0m0.119s 00:09:57.416 sys 0m0.037s 00:09:57.416 11:51:43 
blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:57.416 ************************************ 00:09:57.416 END TEST bdev_qos_bw 00:09:57.416 ************************************ 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.416 11:51:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:57.416 ************************************ 00:09:57.416 START TEST bdev_qos_ro_bw 00:09:57.416 ************************************ 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:09:57.416 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:57.417 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:57.417 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:57.417 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:57.417 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:57.417 11:51:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.85 2047.41 0.00 0.00 2060.00 0.00 0.00 ' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2060.00 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2060 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2060 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
bdev/blockdev.sh@393 -- # qos_limit=2048 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -lt 1843 ']' 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -gt 2252 ']' 00:10:02.691 00:10:02.691 real 0m5.184s 00:10:02.691 user 0m0.115s 00:10:02.691 sys 0m0.038s 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.691 11:51:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:10:02.691 ************************************ 00:10:02.691 END TEST bdev_qos_ro_bw 00:10:02.691 ************************************ 00:10:02.691 11:51:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:10:02.691 11:51:48 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.691 11:51:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:03.260 00:10:03.260 Latency(us) 00:10:03.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.260 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:03.260 Malloc_0 : 26.80 23114.87 90.29 0.00 0.00 10969.26 1848.12 503316.48 00:10:03.260 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:03.260 Null_1 : 26.94 22075.84 86.23 0.00 0.00 11570.77 724.17 148478.36 00:10:03.260 =================================================================================================================== 00:10:03.260 Total : 45190.71 176.53 0.00 0.00 11263.92 724.17 503316.48 00:10:03.260 0 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 4082242 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 4082242 ']' 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 4082242 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4082242 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4082242' 00:10:03.260 killing process with pid 4082242 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 4082242 00:10:03.260 
Received shutdown signal, test time was about 27.009079 seconds 00:10:03.260 00:10:03.260 Latency(us) 00:10:03.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.260 =================================================================================================================== 00:10:03.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.260 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 4082242 00:10:03.520 11:51:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:10:03.520 00:10:03.520 real 0m28.440s 00:10:03.520 user 0m29.153s 00:10:03.520 sys 0m0.860s 00:10:03.520 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.520 11:51:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 ************************************ 00:10:03.520 END TEST bdev_qos 00:10:03.520 ************************************ 00:10:03.520 11:51:49 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:10:03.520 11:51:49 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.520 11:51:49 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.520 11:51:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 ************************************ 00:10:03.520 START TEST bdev_qd_sampling 00:10:03.520 ************************************ 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=4087097 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 4087097' 00:10:03.520 Process bdev QD sampling period testing pid: 4087097 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 4087097 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 4087097 ']' 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.520 11:51:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:03.780 [2024-07-25 11:51:49.685844] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
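As with the earlier bdevperf instances, the harness launches the app in the background and then sits in waitforlisten until the RPC socket is up before issuing any rpc_cmd calls. A minimal stand-in for that wait (a hypothetical helper, not the real waitforlisten, which also tracks the target pid), assuming the default /var/tmp/spdk.sock path named in the message above:

# Poll for the app's UNIX domain RPC socket before sending RPCs.
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while ((retries-- > 0)); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "RPC socket $sock never appeared" >&2
    return 1
}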
00:10:03.780 [2024-07-25 11:51:49.685900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087097 ] 00:10:03.780 [the same qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used pair repeats here for 0000:3d:01.0 through 0000:3f:01.6] 00:10:03.780
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:03.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:03.780 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:03.780 [2024-07-25 11:51:49.815792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.040 [2024-07-25 11:51:49.902582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.040 [2024-07-25 11:51:49.902588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:04.608 Malloc_QD 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.608 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:04.608 [ 00:10:04.608 { 00:10:04.608 "name": "Malloc_QD", 00:10:04.608 "aliases": [ 00:10:04.608 "59ac74e1-6f2f-42b6-871c-043b2fa54c4b" 00:10:04.608 ], 00:10:04.608 "product_name": "Malloc disk", 00:10:04.608 "block_size": 512, 00:10:04.608 "num_blocks": 262144, 00:10:04.609 "uuid": "59ac74e1-6f2f-42b6-871c-043b2fa54c4b", 00:10:04.609 "assigned_rate_limits": { 00:10:04.609 "rw_ios_per_sec": 0, 00:10:04.609 "rw_mbytes_per_sec": 0, 00:10:04.609 "r_mbytes_per_sec": 0, 00:10:04.609 "w_mbytes_per_sec": 0 00:10:04.609 }, 00:10:04.609 "claimed": false, 00:10:04.609 "zoned": false, 00:10:04.609 "supported_io_types": { 00:10:04.609 "read": true, 00:10:04.609 "write": true, 00:10:04.609 "unmap": true, 00:10:04.609 "flush": true, 00:10:04.609 "reset": true, 00:10:04.609 "nvme_admin": false, 00:10:04.609 "nvme_io": false, 00:10:04.609 "nvme_io_md": false, 00:10:04.609 "write_zeroes": true, 00:10:04.609 "zcopy": true, 00:10:04.609 "get_zone_info": false, 00:10:04.609 "zone_management": false, 00:10:04.609 "zone_append": false, 00:10:04.609 "compare": false, 00:10:04.609 "compare_and_write": false, 00:10:04.609 "abort": true, 00:10:04.609 "seek_hole": false, 00:10:04.609 "seek_data": false, 00:10:04.609 "copy": true, 00:10:04.609 "nvme_iov_md": false 00:10:04.609 }, 00:10:04.609 "memory_domains": [ 00:10:04.609 { 00:10:04.609 "dma_device_id": "system", 00:10:04.609 "dma_device_type": 1 00:10:04.609 }, 00:10:04.609 { 00:10:04.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.609 "dma_device_type": 2 00:10:04.609 } 00:10:04.609 ], 00:10:04.609 "driver_specific": {} 00:10:04.609 } 00:10:04.609 ] 00:10:04.609 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.609 11:51:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:10:04.609 11:51:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:10:04.609 11:51:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:04.868 Running I/O for 5 seconds... 
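Once I/O is running, the check performed by the rpc_cmd/jq calls just below is that the sampling period of 10 configured on Malloc_QD is reported back unchanged by bdev_get_iostat. Roughly, with the same RPCs driven through rpc.py instead of the harness's rpc_cmd wrapper:

#!/bin/bash
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

"$RPC" bdev_set_qd_sampling_period Malloc_QD 10
period=$("$RPC" bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period')
if [[ $period != 10 ]]; then
    echo "FAIL: expected sampling period 10, bdev reports '$period'" >&2
    exit 1
fi
echo "PASS: queue depth sampling period is $period"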
00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:10:06.774 "tick_rate": 2500000000, 00:10:06.774 "ticks": 14252139901785218, 00:10:06.774 "bdevs": [ 00:10:06.774 { 00:10:06.774 "name": "Malloc_QD", 00:10:06.774 "bytes_read": 794866176, 00:10:06.774 "num_read_ops": 194052, 00:10:06.774 "bytes_written": 0, 00:10:06.774 "num_write_ops": 0, 00:10:06.774 "bytes_unmapped": 0, 00:10:06.774 "num_unmap_ops": 0, 00:10:06.774 "bytes_copied": 0, 00:10:06.774 "num_copy_ops": 0, 00:10:06.774 "read_latency_ticks": 2448443081376, 00:10:06.774 "max_read_latency_ticks": 15661858, 00:10:06.774 "min_read_latency_ticks": 251078, 00:10:06.774 "write_latency_ticks": 0, 00:10:06.774 "max_write_latency_ticks": 0, 00:10:06.774 "min_write_latency_ticks": 0, 00:10:06.774 "unmap_latency_ticks": 0, 00:10:06.774 "max_unmap_latency_ticks": 0, 00:10:06.774 "min_unmap_latency_ticks": 0, 00:10:06.774 "copy_latency_ticks": 0, 00:10:06.774 "max_copy_latency_ticks": 0, 00:10:06.774 "min_copy_latency_ticks": 0, 00:10:06.774 "io_error": {}, 00:10:06.774 "queue_depth_polling_period": 10, 00:10:06.774 "queue_depth": 512, 00:10:06.774 "io_time": 30, 00:10:06.774 "weighted_io_time": 15360 00:10:06.774 } 00:10:06.774 ] 00:10:06.774 }' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:06.774 00:10:06.774 Latency(us) 00:10:06.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.774 Job: Malloc_QD (Core Mask 0x1, workload: randread, 
depth: 256, IO size: 4096) 00:10:06.774 Malloc_QD : 1.99 50510.56 197.31 0.00 0.00 5055.31 1638.40 5321.52 00:10:06.774 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:06.774 Malloc_QD : 1.99 50759.08 198.28 0.00 0.00 5031.37 1428.68 6265.24 00:10:06.774 =================================================================================================================== 00:10:06.774 Total : 101269.64 395.58 0.00 0.00 5043.31 1428.68 6265.24 00:10:06.774 0 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 4087097 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 4087097 ']' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 4087097 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4087097 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4087097' 00:10:06.774 killing process with pid 4087097 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 4087097 00:10:06.774 Received shutdown signal, test time was about 2.078513 seconds 00:10:06.774 00:10:06.774 Latency(us) 00:10:06.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.774 =================================================================================================================== 00:10:06.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:06.774 11:51:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 4087097 00:10:07.071 11:51:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:10:07.071 00:10:07.071 real 0m3.403s 00:10:07.071 user 0m6.679s 00:10:07.071 sys 0m0.434s 00:10:07.071 11:51:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.071 11:51:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 END TEST bdev_qd_sampling 00:10:07.071 ************************************ 00:10:07.071 11:51:53 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:10:07.071 11:51:53 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.071 11:51:53 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.071 11:51:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 START TEST bdev_error 00:10:07.071 ************************************ 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 
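The error suite starting here layers an error-injection bdev over a plain malloc bdev; the EE_Dev_1 name assigned to ERR_DEV on the following lines reflects the error module's convention of prefixing the base bdev's name with EE_. A sketch of how such a stack is typically assembled; the malloc size and the inject parameters below are illustrative assumptions, not values read from this run:

#!/bin/bash
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

# Base data bdev, then an error bdev wrapped around it (exposed as EE_Dev_1).
"$RPC" bdev_malloc_create -b Dev_1 128 512
"$RPC" bdev_error_create Dev_1

# Ask the error bdev to fail the next few submitted I/Os.
"$RPC" bdev_error_inject_error EE_Dev_1 all failure -n 5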
00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=4087674 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 4087674' 00:10:07.071 Process error testing pid: 4087674 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:10:07.071 11:51:53 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 4087674 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 4087674 ']' 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.071 11:51:53 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:07.339 [2024-07-25 11:51:53.177576] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:07.339 [2024-07-25 11:51:53.177634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087674 ] 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:07.339 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:07.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.339 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:07.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.340 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:07.340 [2024-07-25 11:51:53.296813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.340 [2024-07-25 11:51:53.381947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:10:08.275 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:08.275 11:51:54 blockdev_general.bdev_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.275 Dev_1 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.275 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.275 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 [ 00:10:08.276 { 00:10:08.276 "name": "Dev_1", 00:10:08.276 "aliases": [ 00:10:08.276 "518052e2-9f2b-46df-91ea-bcc99de39dfa" 00:10:08.276 ], 00:10:08.276 "product_name": "Malloc disk", 00:10:08.276 "block_size": 512, 00:10:08.276 "num_blocks": 262144, 00:10:08.276 "uuid": "518052e2-9f2b-46df-91ea-bcc99de39dfa", 00:10:08.276 "assigned_rate_limits": { 00:10:08.276 "rw_ios_per_sec": 0, 00:10:08.276 "rw_mbytes_per_sec": 0, 00:10:08.276 "r_mbytes_per_sec": 0, 00:10:08.276 "w_mbytes_per_sec": 0 00:10:08.276 }, 00:10:08.276 "claimed": false, 00:10:08.276 "zoned": false, 00:10:08.276 "supported_io_types": { 00:10:08.276 "read": true, 00:10:08.276 "write": true, 00:10:08.276 "unmap": true, 00:10:08.276 "flush": true, 00:10:08.276 "reset": true, 00:10:08.276 "nvme_admin": false, 00:10:08.276 "nvme_io": false, 00:10:08.276 "nvme_io_md": false, 00:10:08.276 "write_zeroes": true, 00:10:08.276 "zcopy": true, 00:10:08.276 "get_zone_info": false, 00:10:08.276 "zone_management": false, 00:10:08.276 "zone_append": false, 00:10:08.276 "compare": false, 00:10:08.276 "compare_and_write": false, 00:10:08.276 "abort": true, 00:10:08.276 "seek_hole": false, 00:10:08.276 "seek_data": false, 00:10:08.276 "copy": true, 00:10:08.276 "nvme_iov_md": false 00:10:08.276 }, 00:10:08.276 "memory_domains": [ 00:10:08.276 { 00:10:08.276 "dma_device_id": "system", 00:10:08.276 "dma_device_type": 1 00:10:08.276 }, 00:10:08.276 { 00:10:08.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.276 "dma_device_type": 2 00:10:08.276 } 00:10:08.276 ], 00:10:08.276 "driver_specific": {} 00:10:08.276 } 00:10:08.276 ] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 
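Editor's note: here the bdev_error suite builds its device stack — a 128 MB malloc bdev Dev_1 (Dev_2 follows), then bdev_error_create wraps Dev_1 so the test can fail I/O on demand; per the DEV/ERR variables set above, the wrapper is exposed as EE_Dev_1. A hedged sketch of the same stack against a bdevperf started with -z (RPC-only mode); the socket path is the SPDK default and may differ in other environments:

  sock=/var/tmp/spdk.sock
  ./scripts/rpc.py -s "$sock" bdev_malloc_create -b Dev_1 128 512    # backing device
  ./scripts/rpc.py -s "$sock" bdev_error_create Dev_1                # exposes error bdev EE_Dev_1 on top of Dev_1
  ./scripts/rpc.py -s "$sock" bdev_get_bdevs -b EE_Dev_1             # confirm the wrapper registered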
00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 true 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 Dev_2 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 [ 00:10:08.276 { 00:10:08.276 "name": "Dev_2", 00:10:08.276 "aliases": [ 00:10:08.276 "134ae51c-5175-49b0-9310-ff69ee86b442" 00:10:08.276 ], 00:10:08.276 "product_name": "Malloc disk", 00:10:08.276 "block_size": 512, 00:10:08.276 "num_blocks": 262144, 00:10:08.276 "uuid": "134ae51c-5175-49b0-9310-ff69ee86b442", 00:10:08.276 "assigned_rate_limits": { 00:10:08.276 "rw_ios_per_sec": 0, 00:10:08.276 "rw_mbytes_per_sec": 0, 00:10:08.276 "r_mbytes_per_sec": 0, 00:10:08.276 "w_mbytes_per_sec": 0 00:10:08.276 }, 00:10:08.276 "claimed": false, 00:10:08.276 "zoned": false, 00:10:08.276 "supported_io_types": { 00:10:08.276 "read": true, 00:10:08.276 "write": true, 00:10:08.276 "unmap": true, 00:10:08.276 "flush": true, 00:10:08.276 "reset": true, 00:10:08.276 "nvme_admin": false, 00:10:08.276 "nvme_io": false, 00:10:08.276 "nvme_io_md": false, 00:10:08.276 "write_zeroes": true, 00:10:08.276 "zcopy": true, 00:10:08.276 "get_zone_info": false, 00:10:08.276 "zone_management": false, 00:10:08.276 "zone_append": false, 00:10:08.276 "compare": false, 00:10:08.276 "compare_and_write": false, 00:10:08.276 "abort": true, 00:10:08.276 "seek_hole": false, 00:10:08.276 "seek_data": false, 00:10:08.276 "copy": true, 00:10:08.276 "nvme_iov_md": false 00:10:08.276 }, 00:10:08.276 "memory_domains": [ 00:10:08.276 { 00:10:08.276 "dma_device_id": "system", 00:10:08.276 "dma_device_type": 1 00:10:08.276 }, 00:10:08.276 { 00:10:08.276 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.276 "dma_device_type": 2 00:10:08.276 } 00:10:08.276 ], 00:10:08.276 "driver_specific": {} 00:10:08.276 } 00:10:08.276 ] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 11:51:54 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:10:08.276 11:51:54 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:08.276 Running I/O for 5 seconds... 00:10:09.212 11:51:55 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 4087674 00:10:09.212 11:51:55 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 4087674' 00:10:09.212 Process is existed as continue on error is set. Pid: 4087674 00:10:09.212 11:51:55 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.212 11:51:55 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:09.212 11:51:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.212 11:51:55 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:10:09.471 Timeout while waiting for response: 00:10:09.471 00:10:09.471 00:10:13.664 00:10:13.664 Latency(us) 00:10:13.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.664 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:13.664 EE_Dev_1 : 0.90 40248.59 157.22 5.55 0.00 394.23 119.60 642.25 00:10:13.664 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:13.664 Dev_2 : 5.00 87806.19 342.99 0.00 0.00 178.97 61.44 19084.08 00:10:13.664 =================================================================================================================== 00:10:13.664 Total : 128054.77 500.21 5.55 0.00 195.38 61.44 19084.08 00:10:14.233 11:52:00 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 4087674 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 4087674 ']' 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 4087674 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4087674 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4087674' 00:10:14.233 killing process with pid 4087674 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 4087674 00:10:14.233 Received shutdown signal, test time was about 5.000000 seconds 00:10:14.233 00:10:14.233 Latency(us) 00:10:14.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.233 =================================================================================================================== 00:10:14.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:14.233 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 4087674 00:10:14.492 11:52:00 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=4088984 00:10:14.492 11:52:00 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 4088984' 00:10:14.492 Process error testing pid: 4088984 00:10:14.492 11:52:00 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:10:14.492 11:52:00 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 4088984 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 4088984 ']' 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.492 11:52:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:14.752 [2024-07-25 11:52:00.617317] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
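Editor's note: the first bdev_error pass above arms the wrapper with bdev_error_inject_error EE_Dev_1 all failure -n 5 and runs bdevperf with continue-on-error set, so the injected failures show up in the EE_Dev_1 Fail/s column; it then deletes the error and malloc bdevs and kills the app. The second pass starting here omits continue-on-error, so perform_tests is expected to fail (the NOT wait check further down). A hedged sketch of the injection and teardown RPCs, reusing the stack from the previous sketch:

  sock=/var/tmp/spdk.sock
  ./scripts/rpc.py -s "$sock" bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type
  ./scripts/rpc.py -s "$sock" bdev_error_delete EE_Dev_1
  ./scripts/rpc.py -s "$sock" bdev_malloc_delete Dev_1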
00:10:14.752 [2024-07-25 11:52:00.617382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4088984 ] 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:14.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.752 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:14.753 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:14.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.753 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:14.753 [2024-07-25 11:52:00.736984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.753 [2024-07-25 11:52:00.824442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:10:15.688 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.688 Dev_1 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.688 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.688 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.688 [ 00:10:15.688 { 00:10:15.688 "name": "Dev_1", 00:10:15.688 "aliases": [ 
00:10:15.688 "97c5ee7b-317b-4d64-a7a0-611306ee6fb9" 00:10:15.688 ], 00:10:15.689 "product_name": "Malloc disk", 00:10:15.689 "block_size": 512, 00:10:15.689 "num_blocks": 262144, 00:10:15.689 "uuid": "97c5ee7b-317b-4d64-a7a0-611306ee6fb9", 00:10:15.689 "assigned_rate_limits": { 00:10:15.689 "rw_ios_per_sec": 0, 00:10:15.689 "rw_mbytes_per_sec": 0, 00:10:15.689 "r_mbytes_per_sec": 0, 00:10:15.689 "w_mbytes_per_sec": 0 00:10:15.689 }, 00:10:15.689 "claimed": false, 00:10:15.689 "zoned": false, 00:10:15.689 "supported_io_types": { 00:10:15.689 "read": true, 00:10:15.689 "write": true, 00:10:15.689 "unmap": true, 00:10:15.689 "flush": true, 00:10:15.689 "reset": true, 00:10:15.689 "nvme_admin": false, 00:10:15.689 "nvme_io": false, 00:10:15.689 "nvme_io_md": false, 00:10:15.689 "write_zeroes": true, 00:10:15.689 "zcopy": true, 00:10:15.689 "get_zone_info": false, 00:10:15.689 "zone_management": false, 00:10:15.689 "zone_append": false, 00:10:15.689 "compare": false, 00:10:15.689 "compare_and_write": false, 00:10:15.689 "abort": true, 00:10:15.689 "seek_hole": false, 00:10:15.689 "seek_data": false, 00:10:15.689 "copy": true, 00:10:15.689 "nvme_iov_md": false 00:10:15.689 }, 00:10:15.689 "memory_domains": [ 00:10:15.689 { 00:10:15.689 "dma_device_id": "system", 00:10:15.689 "dma_device_type": 1 00:10:15.689 }, 00:10:15.689 { 00:10:15.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.689 "dma_device_type": 2 00:10:15.689 } 00:10:15.689 ], 00:10:15.689 "driver_specific": {} 00:10:15.689 } 00:10:15.689 ] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.689 true 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.689 Dev_2 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.689 11:52:01 
blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.689 [ 00:10:15.689 { 00:10:15.689 "name": "Dev_2", 00:10:15.689 "aliases": [ 00:10:15.689 "97570d06-45c3-45c6-bb68-2c4dfeb8a240" 00:10:15.689 ], 00:10:15.689 "product_name": "Malloc disk", 00:10:15.689 "block_size": 512, 00:10:15.689 "num_blocks": 262144, 00:10:15.689 "uuid": "97570d06-45c3-45c6-bb68-2c4dfeb8a240", 00:10:15.689 "assigned_rate_limits": { 00:10:15.689 "rw_ios_per_sec": 0, 00:10:15.689 "rw_mbytes_per_sec": 0, 00:10:15.689 "r_mbytes_per_sec": 0, 00:10:15.689 "w_mbytes_per_sec": 0 00:10:15.689 }, 00:10:15.689 "claimed": false, 00:10:15.689 "zoned": false, 00:10:15.689 "supported_io_types": { 00:10:15.689 "read": true, 00:10:15.689 "write": true, 00:10:15.689 "unmap": true, 00:10:15.689 "flush": true, 00:10:15.689 "reset": true, 00:10:15.689 "nvme_admin": false, 00:10:15.689 "nvme_io": false, 00:10:15.689 "nvme_io_md": false, 00:10:15.689 "write_zeroes": true, 00:10:15.689 "zcopy": true, 00:10:15.689 "get_zone_info": false, 00:10:15.689 "zone_management": false, 00:10:15.689 "zone_append": false, 00:10:15.689 "compare": false, 00:10:15.689 "compare_and_write": false, 00:10:15.689 "abort": true, 00:10:15.689 "seek_hole": false, 00:10:15.689 "seek_data": false, 00:10:15.689 "copy": true, 00:10:15.689 "nvme_iov_md": false 00:10:15.689 }, 00:10:15.689 "memory_domains": [ 00:10:15.689 { 00:10:15.689 "dma_device_id": "system", 00:10:15.689 "dma_device_type": 1 00:10:15.689 }, 00:10:15.689 { 00:10:15.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.689 "dma_device_type": 2 00:10:15.689 } 00:10:15.689 ], 00:10:15.689 "driver_specific": {} 00:10:15.689 } 00:10:15.689 ] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 4088984 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4088984 00:10:15.689 11:52:01 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:10:15.689 11:52:01 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 4088984 00:10:15.689 Running I/O for 5 seconds... 00:10:15.689 task offset: 40120 on job bdev=EE_Dev_1 fails 00:10:15.689 00:10:15.689 Latency(us) 00:10:15.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.689 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:15.689 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:10:15.689 EE_Dev_1 : 0.00 23809.52 93.01 5411.26 0.00 456.37 165.48 809.37 00:10:15.689 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:15.689 Dev_2 : 0.00 15009.38 58.63 0.00 0.00 789.34 156.47 1461.45 00:10:15.689 =================================================================================================================== 00:10:15.689 Total : 38818.90 151.64 5411.26 0.00 636.96 156.47 1461.45 00:10:15.689 [2024-07-25 11:52:01.773631] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.689 request: 00:10:15.689 { 00:10:15.689 "method": "perform_tests", 00:10:15.689 "req_id": 1 00:10:15.689 } 00:10:15.689 Got JSON-RPC error response 00:10:15.689 response: 00:10:15.689 { 00:10:15.689 "code": -32603, 00:10:15.689 "message": "bdevperf failed with error Operation not permitted" 00:10:15.689 } 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.949 00:10:15.949 real 0m8.906s 00:10:15.949 user 0m9.289s 00:10:15.949 sys 0m0.840s 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.949 11:52:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:15.949 ************************************ 00:10:15.949 END TEST bdev_error 00:10:15.949 ************************************ 00:10:16.208 11:52:02 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:10:16.208 11:52:02 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:16.208 11:52:02 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.208 11:52:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:16.208 ************************************ 00:10:16.208 START TEST bdev_stat 00:10:16.208 ************************************ 00:10:16.208 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:10:16.208 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:10:16.208 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=4089270 00:10:16.208 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 4089270' 00:10:16.208 Process Bdev IO statistics testing pid: 4089270 00:10:16.208 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 
4096 -w randread -t 10 -C '' 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 4089270 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 4089270 ']' 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.209 11:52:02 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:16.209 [2024-07-25 11:52:02.166347] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:16.209 [2024-07-25 11:52:02.166415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089270 ] 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:16.209 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:16.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.209 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:16.209 [2024-07-25 11:52:02.297813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.468 [2024-07-25 11:52:02.382317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.468 [2024-07-25 11:52:02.382323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:17.036 Malloc_STAT 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:10:17.036 11:52:03 
blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.036 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:17.036 [ 00:10:17.036 { 00:10:17.036 "name": "Malloc_STAT", 00:10:17.036 "aliases": [ 00:10:17.036 "a748156e-5253-46a1-bdc0-04b7b22d5132" 00:10:17.036 ], 00:10:17.036 "product_name": "Malloc disk", 00:10:17.036 "block_size": 512, 00:10:17.036 "num_blocks": 262144, 00:10:17.036 "uuid": "a748156e-5253-46a1-bdc0-04b7b22d5132", 00:10:17.036 "assigned_rate_limits": { 00:10:17.036 "rw_ios_per_sec": 0, 00:10:17.036 "rw_mbytes_per_sec": 0, 00:10:17.036 "r_mbytes_per_sec": 0, 00:10:17.036 "w_mbytes_per_sec": 0 00:10:17.036 }, 00:10:17.036 "claimed": false, 00:10:17.036 "zoned": false, 00:10:17.036 "supported_io_types": { 00:10:17.036 "read": true, 00:10:17.036 "write": true, 00:10:17.036 "unmap": true, 00:10:17.036 "flush": true, 00:10:17.036 "reset": true, 00:10:17.036 "nvme_admin": false, 00:10:17.036 "nvme_io": false, 00:10:17.036 "nvme_io_md": false, 00:10:17.036 "write_zeroes": true, 00:10:17.036 "zcopy": true, 00:10:17.036 "get_zone_info": false, 00:10:17.036 "zone_management": false, 00:10:17.036 "zone_append": false, 00:10:17.036 "compare": false, 00:10:17.036 "compare_and_write": false, 00:10:17.036 "abort": true, 00:10:17.036 "seek_hole": false, 00:10:17.036 "seek_data": false, 00:10:17.036 "copy": true, 00:10:17.036 "nvme_iov_md": false 00:10:17.036 }, 00:10:17.036 "memory_domains": [ 00:10:17.036 { 00:10:17.036 "dma_device_id": "system", 00:10:17.036 "dma_device_type": 1 00:10:17.036 }, 00:10:17.036 { 00:10:17.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.036 "dma_device_type": 2 00:10:17.036 } 00:10:17.036 ], 00:10:17.036 "driver_specific": {} 00:10:17.037 } 00:10:17.037 ] 00:10:17.037 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.037 11:52:03 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:10:17.037 11:52:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:10:17.037 11:52:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.296 Running I/O for 10 seconds... 
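Editor's note: the bdev_stat suite runs bdevperf on two cores (-m 0x3) with -C so each core gets its own channel to Malloc_STAT, then cross-checks the per-channel counters against two aggregate snapshots: the summed per-channel num_read_ops must be no lower than the first aggregate reading and no higher than the second. A hedged sketch of that consistency check, with jq paths matching the iostat JSON shown in this log:

  sock=/var/tmp/spdk.sock
  io1=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  per_ch=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
  io2=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  { [ "$per_ch" -ge "$io1" ] && [ "$per_ch" -le "$io2" ]; } || echo "per-channel sum $per_ch outside [$io1, $io2]"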
00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:10:19.200 "tick_rate": 2500000000, 00:10:19.200 "ticks": 14252170988400320, 00:10:19.200 "bdevs": [ 00:10:19.200 { 00:10:19.200 "name": "Malloc_STAT", 00:10:19.200 "bytes_read": 799060480, 00:10:19.200 "num_read_ops": 195076, 00:10:19.200 "bytes_written": 0, 00:10:19.200 "num_write_ops": 0, 00:10:19.200 "bytes_unmapped": 0, 00:10:19.200 "num_unmap_ops": 0, 00:10:19.200 "bytes_copied": 0, 00:10:19.200 "num_copy_ops": 0, 00:10:19.200 "read_latency_ticks": 2430579168900, 00:10:19.200 "max_read_latency_ticks": 15978492, 00:10:19.200 "min_read_latency_ticks": 256738, 00:10:19.200 "write_latency_ticks": 0, 00:10:19.200 "max_write_latency_ticks": 0, 00:10:19.200 "min_write_latency_ticks": 0, 00:10:19.200 "unmap_latency_ticks": 0, 00:10:19.200 "max_unmap_latency_ticks": 0, 00:10:19.200 "min_unmap_latency_ticks": 0, 00:10:19.200 "copy_latency_ticks": 0, 00:10:19.200 "max_copy_latency_ticks": 0, 00:10:19.200 "min_copy_latency_ticks": 0, 00:10:19.200 "io_error": {} 00:10:19.200 } 00:10:19.200 ] 00:10:19.200 }' 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=195076 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:10:19.200 "tick_rate": 2500000000, 00:10:19.200 "ticks": 14252171152278654, 00:10:19.200 "name": "Malloc_STAT", 00:10:19.200 "channels": [ 00:10:19.200 { 00:10:19.200 "thread_id": 2, 00:10:19.200 "bytes_read": 412090368, 00:10:19.200 "num_read_ops": 100608, 00:10:19.200 "bytes_written": 0, 00:10:19.200 "num_write_ops": 0, 00:10:19.200 "bytes_unmapped": 0, 00:10:19.200 "num_unmap_ops": 0, 
00:10:19.200 "bytes_copied": 0, 00:10:19.200 "num_copy_ops": 0, 00:10:19.200 "read_latency_ticks": 1256454191272, 00:10:19.200 "max_read_latency_ticks": 13274802, 00:10:19.200 "min_read_latency_ticks": 8272968, 00:10:19.200 "write_latency_ticks": 0, 00:10:19.200 "max_write_latency_ticks": 0, 00:10:19.200 "min_write_latency_ticks": 0, 00:10:19.200 "unmap_latency_ticks": 0, 00:10:19.200 "max_unmap_latency_ticks": 0, 00:10:19.200 "min_unmap_latency_ticks": 0, 00:10:19.200 "copy_latency_ticks": 0, 00:10:19.200 "max_copy_latency_ticks": 0, 00:10:19.200 "min_copy_latency_ticks": 0 00:10:19.200 }, 00:10:19.200 { 00:10:19.200 "thread_id": 3, 00:10:19.200 "bytes_read": 414187520, 00:10:19.200 "num_read_ops": 101120, 00:10:19.200 "bytes_written": 0, 00:10:19.200 "num_write_ops": 0, 00:10:19.200 "bytes_unmapped": 0, 00:10:19.200 "num_unmap_ops": 0, 00:10:19.200 "bytes_copied": 0, 00:10:19.200 "num_copy_ops": 0, 00:10:19.200 "read_latency_ticks": 1257355180854, 00:10:19.200 "max_read_latency_ticks": 15978492, 00:10:19.200 "min_read_latency_ticks": 8184140, 00:10:19.200 "write_latency_ticks": 0, 00:10:19.200 "max_write_latency_ticks": 0, 00:10:19.200 "min_write_latency_ticks": 0, 00:10:19.200 "unmap_latency_ticks": 0, 00:10:19.200 "max_unmap_latency_ticks": 0, 00:10:19.200 "min_unmap_latency_ticks": 0, 00:10:19.200 "copy_latency_ticks": 0, 00:10:19.200 "max_copy_latency_ticks": 0, 00:10:19.200 "min_copy_latency_ticks": 0 00:10:19.200 } 00:10:19.200 ] 00:10:19.200 }' 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=100608 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=100608 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=101120 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=201728 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.200 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:10:19.460 "tick_rate": 2500000000, 00:10:19.460 "ticks": 14252171428364860, 00:10:19.460 "bdevs": [ 00:10:19.460 { 00:10:19.460 "name": "Malloc_STAT", 00:10:19.460 "bytes_read": 872460800, 00:10:19.460 "num_read_ops": 212996, 00:10:19.460 "bytes_written": 0, 00:10:19.460 "num_write_ops": 0, 00:10:19.460 "bytes_unmapped": 0, 00:10:19.460 "num_unmap_ops": 0, 00:10:19.460 "bytes_copied": 0, 00:10:19.460 "num_copy_ops": 0, 00:10:19.460 "read_latency_ticks": 2654312868838, 00:10:19.460 "max_read_latency_ticks": 15978492, 00:10:19.460 "min_read_latency_ticks": 256738, 00:10:19.460 "write_latency_ticks": 0, 00:10:19.460 "max_write_latency_ticks": 0, 00:10:19.460 "min_write_latency_ticks": 0, 00:10:19.460 "unmap_latency_ticks": 0, 00:10:19.460 "max_unmap_latency_ticks": 0, 00:10:19.460 "min_unmap_latency_ticks": 0, 00:10:19.460 "copy_latency_ticks": 0, 00:10:19.460 "max_copy_latency_ticks": 0, 00:10:19.460 
"min_copy_latency_ticks": 0, 00:10:19.460 "io_error": {} 00:10:19.460 } 00:10:19.460 ] 00:10:19.460 }' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=212996 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 201728 -lt 195076 ']' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 201728 -gt 212996 ']' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:19.460 00:10:19.460 Latency(us) 00:10:19.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.460 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:10:19.460 Malloc_STAT : 2.15 51151.61 199.81 0.00 0.00 4992.94 1323.83 5321.52 00:10:19.460 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:19.460 Malloc_STAT : 2.15 51461.26 201.02 0.00 0.00 4963.49 917.50 6396.31 00:10:19.460 =================================================================================================================== 00:10:19.460 Total : 102612.88 400.83 0.00 0.00 4978.16 917.50 6396.31 00:10:19.460 0 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 4089270 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 4089270 ']' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 4089270 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4089270 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4089270' 00:10:19.460 killing process with pid 4089270 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 4089270 00:10:19.460 Received shutdown signal, test time was about 2.231817 seconds 00:10:19.460 00:10:19.460 Latency(us) 00:10:19.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.460 =================================================================================================================== 00:10:19.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:19.460 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 4089270 00:10:19.719 11:52:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:10:19.719 00:10:19.719 real 0m3.561s 00:10:19.719 user 0m7.107s 00:10:19.719 sys 0m0.470s 00:10:19.719 11:52:05 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.719 11:52:05 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:10:19.719 ************************************ 00:10:19.719 END TEST bdev_stat 00:10:19.719 ************************************ 00:10:19.719 11:52:05 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:10:19.720 11:52:05 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:10:19.720 00:10:19.720 real 1m54.503s 00:10:19.720 user 7m24.100s 00:10:19.720 sys 0m22.266s 00:10:19.720 11:52:05 blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.720 11:52:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 ************************************ 00:10:19.720 END TEST blockdev_general 00:10:19.720 ************************************ 00:10:19.720 11:52:05 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:10:19.720 11:52:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.720 11:52:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.720 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 ************************************ 00:10:19.720 START TEST bdev_raid 00:10:19.720 ************************************ 00:10:19.720 11:52:05 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:10:19.979 * Looking for test storage... 
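A note on the bdev_stat comparison that completed just above: the script pulls the per-channel num_read_ops counters out of the per-channel iostat JSON, sums them, and requires the sum to stay within bounds taken from the bdev-level counters. A minimal stand-alone sketch of that arithmetic, using the same jq paths as the trace (the variable names here are illustrative, not the test script's own):

    # per_channel_json and bdev_json hold the two JSON documents dumped above
    ch_sum=$(echo "$per_channel_json" | jq '[.channels[].num_read_ops] | add')   # 100608 + 101120 = 201728
    bdev_total=$(echo "$bdev_json" | jq -r '.bdevs[0].num_read_ops')             # 212996
    # the per-channel snapshot is taken before the bdev-level one, so its sum may
    # trail the total, but it should never exceed it
    [ "$ch_sum" -le "$bdev_total" ] || echo "per-channel sum $ch_sum exceeds total $bdev_total" >&2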
00:10:19.979 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:19.979 11:52:05 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:10:19.979 11:52:05 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:10:19.979 11:52:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.979 11:52:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.979 11:52:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.979 ************************************ 00:10:19.979 START TEST raid_function_test_raid0 00:10:19.979 ************************************ 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=4090070 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 4090070' 00:10:19.979 Process raid pid: 4090070 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 4090070 /var/tmp/spdk-raid.sock 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 4090070 ']' 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:19.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
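For context on what is being started here: each raid function test runs a dedicated bdev_svc application on its own RPC socket and drives it with rpc.py -s pointing at that socket, which is why every command in the trace carries /var/tmp/spdk-raid.sock. A rough sketch of that pattern, with the wait-for-listen and cleanup logic simplified to a plain poll (the real script uses waitforlisten and trap-based cleanup):

    sock=/var/tmp/spdk-raid.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # crude poll until the app answers RPCs on its socket
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs online
    kill "$raid_pid"; wait "$raid_pid"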
00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.979 11:52:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:19.979 [2024-07-25 11:52:06.042957] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:19.979 [2024-07-25 11:52:06.043015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 
0000:3f:01.4 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:20.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.239 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:20.239 [2024-07-25 11:52:06.176185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.239 [2024-07-25 11:52:06.262351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.239 [2024-07-25 11:52:06.325059] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.239 [2024-07-25 11:52:06.325092] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:10:21.177 11:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:10:21.177 [2024-07-25 11:52:07.181090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:21.177 [2024-07-25 11:52:07.182444] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:21.177 [2024-07-25 11:52:07.182498] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2605a50 00:10:21.177 [2024-07-25 11:52:07.182508] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:21.177 [2024-07-25 11:52:07.182678] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2468d00 00:10:21.177 [2024-07-25 11:52:07.182793] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2605a50 00:10:21.177 [2024-07-25 11:52:07.182803] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x2605a50 00:10:21.177 [2024-07-25 11:52:07.182893] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.177 Base_1 00:10:21.177 Base_2 00:10:21.177 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:10:21.177 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:10:21.177 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:21.434 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:10:21.691 [2024-07-25 11:52:07.654336] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2448b30 00:10:21.691 /dev/nbd0 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.691 1+0 records in 00:10:21.691 1+0 records out 00:10:21.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262943 s, 15.6 MB/s 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:21.691 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:21.949 { 00:10:21.949 "nbd_device": "/dev/nbd0", 00:10:21.949 "bdev_name": "raid" 00:10:21.949 } 00:10:21.949 ]' 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:21.949 { 00:10:21.949 "nbd_device": "/dev/nbd0", 00:10:21.949 "bdev_name": "raid" 00:10:21.949 } 00:10:21.949 ]' 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:10:21.949 11:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:21.949 11:52:08 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:21.949 4096+0 records in 00:10:21.949 4096+0 records out 00:10:21.949 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.029023 s, 72.3 MB/s 00:10:21.949 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:22.208 4096+0 records in 00:10:22.208 4096+0 records out 00:10:22.208 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.264206 s, 7.9 MB/s 00:10:22.208 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:10:22.208 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:22.467 128+0 records in 00:10:22.467 128+0 records out 00:10:22.467 65536 bytes (66 kB, 64 KiB) copied, 0.00081936 s, 80.0 MB/s 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:10:22.467 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd 
if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:22.467 2035+0 records in 00:10:22.468 2035+0 records out 00:10:22.468 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0116063 s, 89.8 MB/s 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:22.468 456+0 records in 00:10:22.468 456+0 records out 00:10:22.468 233472 bytes (233 kB, 228 KiB) copied, 0.00269353 s, 86.7 MB/s 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.468 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.727 [2024-07-25 11:52:08.660817] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:22.727 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 4090070 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 4090070 ']' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 4090070 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090070 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090070' 00:10:22.989 killing process with pid 4090070 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 4090070 00:10:22.989 [2024-07-25 11:52:08.957331] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.989 11:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 4090070 00:10:22.989 [2024-07-25 11:52:08.957386] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.989 [2024-07-25 11:52:08.957424] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.989 [2024-07-25 11:52:08.957435] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2605a50 name raid, state offline 00:10:22.989 [2024-07-25 11:52:08.972929] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.264 11:52:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:10:23.264 00:10:23.264 real 0m3.177s 00:10:23.264 user 0m4.150s 00:10:23.264 sys 0m1.193s 00:10:23.264 11:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.264 11:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:23.264 ************************************ 00:10:23.264 END TEST raid_function_test_raid0 00:10:23.264 ************************************ 00:10:23.264 11:52:09 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:10:23.264 11:52:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.264 11:52:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.264 11:52:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.264 ************************************ 00:10:23.264 START TEST raid_function_test_concat 00:10:23.264 ************************************ 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=4090757 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 4090757' 00:10:23.264 Process raid pid: 4090757 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 4090757 /var/tmp/spdk-raid.sock 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 4090757 ']' 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:23.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:23.264 11:52:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:23.264 [2024-07-25 11:52:09.297075] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
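The raid0 function test that just finished exercises the array purely through /dev/nbd0 with dd, cmp and blkdiscard, and the concat test now starting repeats the same flow. A condensed sketch of that verify loop, reusing the offsets, counts and array names visible in the trace (collapsed here only to make the sequence easier to follow):

    nbd=/dev/nbd0; file=/raidtest/raidrandtest; blksize=512
    dd if=/dev/urandom of="$file" bs=$blksize count=4096          # seed 2 MiB of random data
    dd if="$file" of="$nbd" bs=$blksize count=4096 oflag=direct   # push it through the raid bdev
    blockdev --flushbufs "$nbd"
    cmp -b -n 2097152 "$file" "$nbd"                              # device must match the file
    unmap_blk_offs=(0 1028 321); unmap_blk_nums=(128 2035 456)
    for i in 0 1 2; do
      unmap_off=$(( unmap_blk_offs[i] * blksize )); unmap_len=$(( unmap_blk_nums[i] * blksize ))
      dd if=/dev/zero of="$file" bs=$blksize seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
      blkdiscard -o "$unmap_off" -l "$unmap_len" "$nbd"           # unmap the same range on the device
      blockdev --flushbufs "$nbd"
      cmp -b -n 2097152 "$file" "$nbd"                            # device must still match the zero-filled file
    done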
00:10:23.264 [2024-07-25 11:52:09.297131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:23.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.264 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:23.265 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:23.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.265 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.535 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.535 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.535 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.535 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:23.535 [2024-07-25 11:52:09.430804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.535 [2024-07-25 11:52:09.516541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.535 [2024-07-25 11:52:09.576154] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.535 [2024-07-25 11:52:09.576188] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:10:24.467 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:10:24.725 [2024-07-25 11:52:10.721100] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:24.725 [2024-07-25 11:52:10.722466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:24.725 [2024-07-25 11:52:10.722520] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x172ea50 00:10:24.725 [2024-07-25 11:52:10.722530] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:24.725 [2024-07-25 11:52:10.722699] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1591d00 00:10:24.725 [2024-07-25 11:52:10.722812] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x172ea50 00:10:24.725 [2024-07-25 11:52:10.722821] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x172ea50 00:10:24.725 [2024-07-25 11:52:10.722912] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:24.725 Base_1 00:10:24.725 Base_2 00:10:24.725 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:10:24.725 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:10:24.725 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:24.983 11:52:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:10:25.549 [2024-07-25 11:52:11.459069] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15719b0 00:10:25.549 /dev/nbd0 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.549 1+0 records in 00:10:25.549 1+0 records out 00:10:25.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208816 s, 19.6 MB/s 00:10:25.549 
11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:25.549 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:25.808 { 00:10:25.808 "nbd_device": "/dev/nbd0", 00:10:25.808 "bdev_name": "raid" 00:10:25.808 } 00:10:25.808 ]' 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:25.808 { 00:10:25.808 "nbd_device": "/dev/nbd0", 00:10:25.808 "bdev_name": "raid" 00:10:25.808 } 00:10:25.808 ]' 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # 
blksize=512 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:25.808 4096+0 records in 00:10:25.808 4096+0 records out 00:10:25.808 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0297566 s, 70.5 MB/s 00:10:25.808 11:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:26.066 4096+0 records in 00:10:26.066 4096+0 records out 00:10:26.066 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.187762 s, 11.2 MB/s 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:26.066 128+0 records in 00:10:26.066 128+0 records out 00:10:26.066 65536 bytes (66 kB, 64 KiB) copied, 0.000834837 s, 78.5 MB/s 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:26.066 2035+0 records in 00:10:26.066 2035+0 records out 00:10:26.066 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0105251 s, 99.0 MB/s 00:10:26.066 11:52:12 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:26.066 456+0 records in 00:10:26.066 456+0 records out 00:10:26.066 233472 bytes (233 kB, 228 KiB) copied, 0.00268757 s, 86.9 MB/s 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.066 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:10:26.323 [2024-07-25 11:52:12.386228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:10:26.323 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 4090757 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 4090757 ']' 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 4090757 00:10:26.582 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090757 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090757' 00:10:26.840 killing process with pid 4090757 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 4090757 00:10:26.840 [2024-07-25 11:52:12.751613] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.840 [2024-07-25 11:52:12.751670] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.840 [2024-07-25 11:52:12.751708] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.840 [2024-07-25 11:52:12.751719] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x172ea50 name raid, state offline 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 4090757 00:10:26.840 [2024-07-25 11:52:12.767721] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.840 
11:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:10:26.840 00:10:26.840 real 0m3.718s 00:10:26.840 user 0m5.257s 00:10:26.840 sys 0m1.241s 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.840 11:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:26.840 ************************************ 00:10:26.840 END TEST raid_function_test_concat 00:10:26.840 ************************************ 00:10:27.100 11:52:12 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:10:27.100 11:52:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:27.100 11:52:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.100 11:52:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.100 ************************************ 00:10:27.100 START TEST raid0_resize_test 00:10:27.100 ************************************ 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid0_resize_test 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=4091383 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 4091383' 00:10:27.100 Process raid pid: 4091383 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 4091383 /var/tmp/spdk-raid.sock 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 4091383 ']' 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:27.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.100 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:27.100 [2024-07-25 11:52:13.091381] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
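Each sub-test drives its own SPDK application over a private RPC socket. The trace above starts bdev_svc with the bdev_raid debug log flag and then waits for that socket; roughly, and assuming the usual background-and-capture-pid pattern used by bdev_raid.sh:

    # start the stub bdev application, then wait until it listens on the raid-test RPC socket
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
            -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # autotest_common.sh helper, polls up to max_retries

All later rpc.py calls in this test pass -s /var/tmp/spdk-raid.sock so they reach this instance rather than a default SPDK target.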
00:10:27.100 [2024-07-25 11:52:13.091437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:27.100 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:27.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.100 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:27.368 [2024-07-25 11:52:13.224289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.368 [2024-07-25 11:52:13.310544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.368 [2024-07-25 11:52:13.369553] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.368 [2024-07-25 11:52:13.369588] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.937 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.937 11:52:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:10:27.937 11:52:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:10:28.196 Base_1 00:10:28.196 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:10:28.454 Base_2 00:10:28.454 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:10:28.454 [2024-07-25 11:52:14.509839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:28.454 [2024-07-25 11:52:14.511175] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:28.454 [2024-07-25 11:52:14.511218] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11d2c80 00:10:28.454 [2024-07-25 11:52:14.511227] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:28.454 [2024-07-25 11:52:14.511402] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd16030 00:10:28.454 [2024-07-25 11:52:14.511495] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11d2c80 00:10:28.454 [2024-07-25 11:52:14.511504] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x11d2c80 00:10:28.454 [2024-07-25 11:52:14.511594] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
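At this point raid0_resize_test has built its volume: two 32 MiB null bdevs with 512-byte blocks are combined into a raid0 named Raid with a 64 KiB strip, and the resulting bdev registers with blockcnt 131072 (131072 * 512 B = 64 MiB). A sketch of that RPC sequence on the same socket, plus the size check the next entries perform:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_null_create Base_1 32 512             # 32 MiB null bdev, 512 B block size
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid   # raid0, 64 KiB strip

    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    raid_size_mb=$(( blkcnt * 512 / 1048576 ))      # 131072 blocks -> 64 MiB

As the following entries show, bdev_null_resize Base_1 64 alone leaves the volume at 64 MiB; only after Base_2 is also resized does the raid report 262144 blocks (128 MiB), which is what the test asserts.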
00:10:28.454 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:10:28.713 [2024-07-25 11:52:14.738421] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:28.714 [2024-07-25 11:52:14.738437] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:28.714 true 00:10:28.714 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:10:28.714 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:10:28.971 [2024-07-25 11:52:14.967173] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.971 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:10:28.971 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:10:28.971 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:10:28.971 11:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:10:29.230 [2024-07-25 11:52:15.187596] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:29.230 [2024-07-25 11:52:15.187614] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:29.230 [2024-07-25 11:52:15.187637] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:29.230 true 00:10:29.230 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:10:29.230 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:10:29.799 [2024-07-25 11:52:15.685044] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 4091383 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 4091383 ']' 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 4091383 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4091383 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
4091383' 00:10:29.799 killing process with pid 4091383 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 4091383 00:10:29.799 [2024-07-25 11:52:15.770670] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.799 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 4091383 00:10:29.799 [2024-07-25 11:52:15.770716] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.799 [2024-07-25 11:52:15.770755] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.799 [2024-07-25 11:52:15.770765] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11d2c80 name Raid, state offline 00:10:29.799 [2024-07-25 11:52:15.771935] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.057 11:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:10:30.057 00:10:30.057 real 0m2.909s 00:10:30.057 user 0m4.483s 00:10:30.057 sys 0m0.647s 00:10:30.057 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.057 11:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.057 ************************************ 00:10:30.057 END TEST raid0_resize_test 00:10:30.057 ************************************ 00:10:30.057 11:52:15 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:10:30.057 11:52:15 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:30.057 11:52:15 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:30.057 11:52:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.057 11:52:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.057 11:52:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.057 ************************************ 00:10:30.058 START TEST raid_state_function_test 00:10:30.058 ************************************ 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:30.058 11:52:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4091945 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4091945' 00:10:30.058 Process raid pid: 4091945 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4091945 /var/tmp/spdk-raid.sock 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4091945 ']' 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:30.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.058 11:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:30.058 [2024-07-25 11:52:16.082360] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:10:30.058 [2024-07-25 11:52:16.082416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:30.058 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.058 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:30.316 [2024-07-25 11:52:16.214857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.316 [2024-07-25 11:52:16.300830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.316 [2024-07-25 11:52:16.360544] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.316 [2024-07-25 11:52:16.360581] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.253 11:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.253 11:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:31.253 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:31.511 [2024-07-25 11:52:17.472801] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.512 [2024-07-25 11:52:17.472837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.512 [2024-07-25 11:52:17.472847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.512 [2024-07-25 11:52:17.472857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.512 11:52:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.512 11:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.077 11:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.077 "name": "Existed_Raid", 00:10:32.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.077 "strip_size_kb": 64, 00:10:32.077 "state": "configuring", 00:10:32.077 "raid_level": "raid0", 00:10:32.077 "superblock": false, 00:10:32.077 "num_base_bdevs": 2, 00:10:32.077 "num_base_bdevs_discovered": 0, 00:10:32.077 "num_base_bdevs_operational": 2, 00:10:32.077 "base_bdevs_list": [ 00:10:32.077 { 00:10:32.077 "name": "BaseBdev1", 00:10:32.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.077 "is_configured": false, 00:10:32.077 "data_offset": 0, 00:10:32.077 "data_size": 0 00:10:32.077 }, 00:10:32.077 { 00:10:32.077 "name": "BaseBdev2", 00:10:32.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.077 "is_configured": false, 00:10:32.077 "data_offset": 0, 00:10:32.077 "data_size": 0 00:10:32.077 } 00:10:32.077 ] 00:10:32.077 }' 00:10:32.077 11:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.077 11:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 11:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:32.900 [2024-07-25 11:52:18.776085] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.900 [2024-07-25 11:52:18.776113] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaaff20 name Existed_Raid, state configuring 00:10:32.900 11:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:32.900 [2024-07-25 11:52:19.004692] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.900 [2024-07-25 11:52:19.004718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.900 [2024-07-25 11:52:19.004727] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.900 [2024-07-25 11:52:19.004738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.158 [2024-07-25 11:52:19.226536] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.158 BaseBdev1 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:33.158 
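verify_raid_bdev_state, whose expansion runs above and repeatedly below, pulls the array's entry out of bdev_raid_get_bdevs all and compares the expected fields (state, level, strip size, operational base bdev count) against the arguments it was given. A sketch of the kind of assertions it performs, assuming the field names shown in the JSON dump above and using jq directly:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    [ "$(jq -r '.state' <<< "$info")" = configuring ]                 # no base bdevs exist yet
    [ "$(jq -r '.raid_level' <<< "$info")" = raid0 ]
    [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 64 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 2 ]

Once BaseBdev1 and BaseBdev2 are created and claimed, the same check runs with an expected state of online and both base bdevs discovered, as the later dumps confirm.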
11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.158 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:33.419 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.678 [ 00:10:33.678 { 00:10:33.678 "name": "BaseBdev1", 00:10:33.678 "aliases": [ 00:10:33.678 "722e18c2-3daf-4dd9-89f6-b75de227afaf" 00:10:33.678 ], 00:10:33.678 "product_name": "Malloc disk", 00:10:33.678 "block_size": 512, 00:10:33.678 "num_blocks": 65536, 00:10:33.678 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 00:10:33.678 "assigned_rate_limits": { 00:10:33.678 "rw_ios_per_sec": 0, 00:10:33.678 "rw_mbytes_per_sec": 0, 00:10:33.678 "r_mbytes_per_sec": 0, 00:10:33.678 "w_mbytes_per_sec": 0 00:10:33.678 }, 00:10:33.678 "claimed": true, 00:10:33.678 "claim_type": "exclusive_write", 00:10:33.678 "zoned": false, 00:10:33.678 "supported_io_types": { 00:10:33.678 "read": true, 00:10:33.678 "write": true, 00:10:33.678 "unmap": true, 00:10:33.678 "flush": true, 00:10:33.678 "reset": true, 00:10:33.678 "nvme_admin": false, 00:10:33.678 "nvme_io": false, 00:10:33.678 "nvme_io_md": false, 00:10:33.678 "write_zeroes": true, 00:10:33.678 "zcopy": true, 00:10:33.678 "get_zone_info": false, 00:10:33.678 "zone_management": false, 00:10:33.678 "zone_append": false, 00:10:33.678 "compare": false, 00:10:33.678 "compare_and_write": false, 00:10:33.678 "abort": true, 00:10:33.678 "seek_hole": false, 00:10:33.678 "seek_data": false, 00:10:33.678 "copy": true, 00:10:33.678 "nvme_iov_md": false 00:10:33.678 }, 00:10:33.678 "memory_domains": [ 00:10:33.678 { 00:10:33.678 "dma_device_id": "system", 00:10:33.678 "dma_device_type": 1 00:10:33.678 }, 00:10:33.678 { 00:10:33.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.678 "dma_device_type": 2 00:10:33.678 } 00:10:33.678 ], 00:10:33.678 "driver_specific": {} 00:10:33.678 } 00:10:33.678 ] 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.678 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.937 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.937 "name": "Existed_Raid", 00:10:33.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.937 "strip_size_kb": 64, 00:10:33.937 "state": "configuring", 00:10:33.937 "raid_level": "raid0", 00:10:33.937 "superblock": false, 00:10:33.937 "num_base_bdevs": 2, 00:10:33.937 "num_base_bdevs_discovered": 1, 00:10:33.937 "num_base_bdevs_operational": 2, 00:10:33.937 "base_bdevs_list": [ 00:10:33.937 { 00:10:33.937 "name": "BaseBdev1", 00:10:33.937 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 00:10:33.937 "is_configured": true, 00:10:33.937 "data_offset": 0, 00:10:33.937 "data_size": 65536 00:10:33.937 }, 00:10:33.937 { 00:10:33.937 "name": "BaseBdev2", 00:10:33.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.937 "is_configured": false, 00:10:33.937 "data_offset": 0, 00:10:33.937 "data_size": 0 00:10:33.937 } 00:10:33.937 ] 00:10:33.937 }' 00:10:33.937 11:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.937 11:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.505 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:34.764 [2024-07-25 11:52:20.698520] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.764 [2024-07-25 11:52:20.698554] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaaf810 name Existed_Raid, state configuring 00:10:34.764 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:35.023 [2024-07-25 11:52:20.927157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.023 [2024-07-25 11:52:20.928546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.023 [2024-07-25 11:52:20.928576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.023 11:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.282 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.282 "name": "Existed_Raid", 00:10:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.282 "strip_size_kb": 64, 00:10:35.282 "state": "configuring", 00:10:35.282 "raid_level": "raid0", 00:10:35.282 "superblock": false, 00:10:35.282 "num_base_bdevs": 2, 00:10:35.282 "num_base_bdevs_discovered": 1, 00:10:35.282 "num_base_bdevs_operational": 2, 00:10:35.282 "base_bdevs_list": [ 00:10:35.282 { 00:10:35.282 "name": "BaseBdev1", 00:10:35.282 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 00:10:35.282 "is_configured": true, 00:10:35.282 "data_offset": 0, 00:10:35.282 "data_size": 65536 00:10:35.282 }, 00:10:35.282 { 00:10:35.282 "name": "BaseBdev2", 00:10:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.282 "is_configured": false, 00:10:35.282 "data_offset": 0, 00:10:35.283 "data_size": 0 00:10:35.283 } 00:10:35.283 ] 00:10:35.283 }' 00:10:35.283 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.283 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.910 [2024-07-25 11:52:21.973037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.910 [2024-07-25 11:52:21.973068] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xab0600 00:10:35.910 [2024-07-25 11:52:21.973076] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:35.910 [2024-07-25 11:52:21.973265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xab1ef0 00:10:35.910 [2024-07-25 11:52:21.973376] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xab0600 00:10:35.910 [2024-07-25 11:52:21.973385] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xab0600 00:10:35.910 [2024-07-25 11:52:21.973533] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.910 BaseBdev2 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.910 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:36.169 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.427 [ 00:10:36.427 { 00:10:36.427 "name": "BaseBdev2", 00:10:36.427 "aliases": [ 00:10:36.427 "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4" 00:10:36.427 ], 00:10:36.427 "product_name": "Malloc disk", 00:10:36.427 "block_size": 512, 00:10:36.427 "num_blocks": 65536, 00:10:36.427 "uuid": "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4", 00:10:36.428 "assigned_rate_limits": { 00:10:36.428 "rw_ios_per_sec": 0, 00:10:36.428 "rw_mbytes_per_sec": 0, 00:10:36.428 "r_mbytes_per_sec": 0, 00:10:36.428 "w_mbytes_per_sec": 0 00:10:36.428 }, 00:10:36.428 "claimed": true, 00:10:36.428 "claim_type": "exclusive_write", 00:10:36.428 "zoned": false, 00:10:36.428 "supported_io_types": { 00:10:36.428 "read": true, 00:10:36.428 "write": true, 00:10:36.428 "unmap": true, 00:10:36.428 "flush": true, 00:10:36.428 "reset": true, 00:10:36.428 "nvme_admin": false, 00:10:36.428 "nvme_io": false, 00:10:36.428 "nvme_io_md": false, 00:10:36.428 "write_zeroes": true, 00:10:36.428 "zcopy": true, 00:10:36.428 "get_zone_info": false, 00:10:36.428 "zone_management": false, 00:10:36.428 "zone_append": false, 00:10:36.428 "compare": false, 00:10:36.428 "compare_and_write": false, 00:10:36.428 "abort": true, 00:10:36.428 "seek_hole": false, 00:10:36.428 "seek_data": false, 00:10:36.428 "copy": true, 00:10:36.428 "nvme_iov_md": false 00:10:36.428 }, 00:10:36.428 "memory_domains": [ 00:10:36.428 { 00:10:36.428 "dma_device_id": "system", 00:10:36.428 "dma_device_type": 1 00:10:36.428 }, 00:10:36.428 { 00:10:36.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.428 "dma_device_type": 2 00:10:36.428 } 00:10:36.428 ], 00:10:36.428 "driver_specific": {} 00:10:36.428 } 00:10:36.428 ] 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.428 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.687 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.687 "name": "Existed_Raid", 00:10:36.687 "uuid": "9fbeff9c-1ebd-403d-9742-63d14aeb5812", 00:10:36.687 "strip_size_kb": 64, 00:10:36.687 "state": "online", 00:10:36.687 "raid_level": "raid0", 00:10:36.687 "superblock": false, 00:10:36.687 "num_base_bdevs": 2, 00:10:36.687 "num_base_bdevs_discovered": 2, 00:10:36.687 "num_base_bdevs_operational": 2, 00:10:36.687 "base_bdevs_list": [ 00:10:36.687 { 00:10:36.687 "name": "BaseBdev1", 00:10:36.687 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 00:10:36.687 "is_configured": true, 00:10:36.687 "data_offset": 0, 00:10:36.687 "data_size": 65536 00:10:36.687 }, 00:10:36.687 { 00:10:36.687 "name": "BaseBdev2", 00:10:36.687 "uuid": "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4", 00:10:36.687 "is_configured": true, 00:10:36.687 "data_offset": 0, 00:10:36.687 "data_size": 65536 00:10:36.687 } 00:10:36.687 ] 00:10:36.687 }' 00:10:36.687 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.687 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:37.255 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:37.513 [2024-07-25 11:52:23.445155] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:37.513 "name": "Existed_Raid", 00:10:37.513 "aliases": [ 00:10:37.513 "9fbeff9c-1ebd-403d-9742-63d14aeb5812" 00:10:37.513 ], 00:10:37.513 "product_name": "Raid Volume", 00:10:37.513 "block_size": 512, 00:10:37.513 "num_blocks": 131072, 00:10:37.513 "uuid": 
"9fbeff9c-1ebd-403d-9742-63d14aeb5812", 00:10:37.513 "assigned_rate_limits": { 00:10:37.513 "rw_ios_per_sec": 0, 00:10:37.513 "rw_mbytes_per_sec": 0, 00:10:37.513 "r_mbytes_per_sec": 0, 00:10:37.513 "w_mbytes_per_sec": 0 00:10:37.513 }, 00:10:37.513 "claimed": false, 00:10:37.513 "zoned": false, 00:10:37.513 "supported_io_types": { 00:10:37.513 "read": true, 00:10:37.513 "write": true, 00:10:37.513 "unmap": true, 00:10:37.513 "flush": true, 00:10:37.513 "reset": true, 00:10:37.513 "nvme_admin": false, 00:10:37.513 "nvme_io": false, 00:10:37.513 "nvme_io_md": false, 00:10:37.513 "write_zeroes": true, 00:10:37.513 "zcopy": false, 00:10:37.513 "get_zone_info": false, 00:10:37.513 "zone_management": false, 00:10:37.513 "zone_append": false, 00:10:37.513 "compare": false, 00:10:37.513 "compare_and_write": false, 00:10:37.513 "abort": false, 00:10:37.513 "seek_hole": false, 00:10:37.513 "seek_data": false, 00:10:37.513 "copy": false, 00:10:37.513 "nvme_iov_md": false 00:10:37.513 }, 00:10:37.513 "memory_domains": [ 00:10:37.513 { 00:10:37.513 "dma_device_id": "system", 00:10:37.513 "dma_device_type": 1 00:10:37.513 }, 00:10:37.513 { 00:10:37.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.513 "dma_device_type": 2 00:10:37.513 }, 00:10:37.513 { 00:10:37.513 "dma_device_id": "system", 00:10:37.513 "dma_device_type": 1 00:10:37.513 }, 00:10:37.513 { 00:10:37.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.513 "dma_device_type": 2 00:10:37.513 } 00:10:37.513 ], 00:10:37.513 "driver_specific": { 00:10:37.513 "raid": { 00:10:37.513 "uuid": "9fbeff9c-1ebd-403d-9742-63d14aeb5812", 00:10:37.513 "strip_size_kb": 64, 00:10:37.513 "state": "online", 00:10:37.513 "raid_level": "raid0", 00:10:37.513 "superblock": false, 00:10:37.513 "num_base_bdevs": 2, 00:10:37.513 "num_base_bdevs_discovered": 2, 00:10:37.513 "num_base_bdevs_operational": 2, 00:10:37.513 "base_bdevs_list": [ 00:10:37.513 { 00:10:37.513 "name": "BaseBdev1", 00:10:37.513 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 00:10:37.513 "is_configured": true, 00:10:37.513 "data_offset": 0, 00:10:37.513 "data_size": 65536 00:10:37.513 }, 00:10:37.513 { 00:10:37.513 "name": "BaseBdev2", 00:10:37.513 "uuid": "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4", 00:10:37.513 "is_configured": true, 00:10:37.513 "data_offset": 0, 00:10:37.513 "data_size": 65536 00:10:37.513 } 00:10:37.513 ] 00:10:37.513 } 00:10:37.513 } 00:10:37.513 }' 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:37.513 BaseBdev2' 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:37.513 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.772 "name": "BaseBdev1", 00:10:37.772 "aliases": [ 00:10:37.772 "722e18c2-3daf-4dd9-89f6-b75de227afaf" 00:10:37.772 ], 00:10:37.772 "product_name": "Malloc disk", 00:10:37.772 "block_size": 512, 00:10:37.772 "num_blocks": 65536, 00:10:37.772 "uuid": "722e18c2-3daf-4dd9-89f6-b75de227afaf", 
00:10:37.772 "assigned_rate_limits": { 00:10:37.772 "rw_ios_per_sec": 0, 00:10:37.772 "rw_mbytes_per_sec": 0, 00:10:37.772 "r_mbytes_per_sec": 0, 00:10:37.772 "w_mbytes_per_sec": 0 00:10:37.772 }, 00:10:37.772 "claimed": true, 00:10:37.772 "claim_type": "exclusive_write", 00:10:37.772 "zoned": false, 00:10:37.772 "supported_io_types": { 00:10:37.772 "read": true, 00:10:37.772 "write": true, 00:10:37.772 "unmap": true, 00:10:37.772 "flush": true, 00:10:37.772 "reset": true, 00:10:37.772 "nvme_admin": false, 00:10:37.772 "nvme_io": false, 00:10:37.772 "nvme_io_md": false, 00:10:37.772 "write_zeroes": true, 00:10:37.772 "zcopy": true, 00:10:37.772 "get_zone_info": false, 00:10:37.772 "zone_management": false, 00:10:37.772 "zone_append": false, 00:10:37.772 "compare": false, 00:10:37.772 "compare_and_write": false, 00:10:37.772 "abort": true, 00:10:37.772 "seek_hole": false, 00:10:37.772 "seek_data": false, 00:10:37.772 "copy": true, 00:10:37.772 "nvme_iov_md": false 00:10:37.772 }, 00:10:37.772 "memory_domains": [ 00:10:37.772 { 00:10:37.772 "dma_device_id": "system", 00:10:37.772 "dma_device_type": 1 00:10:37.772 }, 00:10:37.772 { 00:10:37.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.772 "dma_device_type": 2 00:10:37.772 } 00:10:37.772 ], 00:10:37.772 "driver_specific": {} 00:10:37.772 }' 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.772 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.031 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.031 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.031 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.031 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:38.031 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.031 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.031 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:38.031 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:38.031 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:38.031 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:38.291 "name": "BaseBdev2", 00:10:38.291 "aliases": [ 00:10:38.291 "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4" 00:10:38.291 ], 00:10:38.291 "product_name": "Malloc disk", 00:10:38.291 "block_size": 512, 00:10:38.291 "num_blocks": 65536, 00:10:38.291 "uuid": "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4", 00:10:38.291 "assigned_rate_limits": { 00:10:38.291 "rw_ios_per_sec": 0, 00:10:38.291 "rw_mbytes_per_sec": 0, 00:10:38.291 "r_mbytes_per_sec": 0, 00:10:38.291 "w_mbytes_per_sec": 0 
00:10:38.291 }, 00:10:38.291 "claimed": true, 00:10:38.291 "claim_type": "exclusive_write", 00:10:38.291 "zoned": false, 00:10:38.291 "supported_io_types": { 00:10:38.291 "read": true, 00:10:38.291 "write": true, 00:10:38.291 "unmap": true, 00:10:38.291 "flush": true, 00:10:38.291 "reset": true, 00:10:38.291 "nvme_admin": false, 00:10:38.291 "nvme_io": false, 00:10:38.291 "nvme_io_md": false, 00:10:38.291 "write_zeroes": true, 00:10:38.291 "zcopy": true, 00:10:38.291 "get_zone_info": false, 00:10:38.291 "zone_management": false, 00:10:38.291 "zone_append": false, 00:10:38.291 "compare": false, 00:10:38.291 "compare_and_write": false, 00:10:38.291 "abort": true, 00:10:38.291 "seek_hole": false, 00:10:38.291 "seek_data": false, 00:10:38.291 "copy": true, 00:10:38.291 "nvme_iov_md": false 00:10:38.291 }, 00:10:38.291 "memory_domains": [ 00:10:38.291 { 00:10:38.291 "dma_device_id": "system", 00:10:38.291 "dma_device_type": 1 00:10:38.291 }, 00:10:38.291 { 00:10:38.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.291 "dma_device_type": 2 00:10:38.291 } 00:10:38.291 ], 00:10:38.291 "driver_specific": {} 00:10:38.291 }' 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.291 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:38.551 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:38.810 [2024-07-25 11:52:24.820579] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.811 [2024-07-25 11:52:24.820603] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.811 [2024-07-25 11:52:24.820640] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.811 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.070 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.070 "name": "Existed_Raid", 00:10:39.070 "uuid": "9fbeff9c-1ebd-403d-9742-63d14aeb5812", 00:10:39.070 "strip_size_kb": 64, 00:10:39.070 "state": "offline", 00:10:39.070 "raid_level": "raid0", 00:10:39.070 "superblock": false, 00:10:39.070 "num_base_bdevs": 2, 00:10:39.070 "num_base_bdevs_discovered": 1, 00:10:39.070 "num_base_bdevs_operational": 1, 00:10:39.070 "base_bdevs_list": [ 00:10:39.070 { 00:10:39.070 "name": null, 00:10:39.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.070 "is_configured": false, 00:10:39.070 "data_offset": 0, 00:10:39.070 "data_size": 65536 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "name": "BaseBdev2", 00:10:39.070 "uuid": "40b00d3a-fe5e-4a74-964e-ab7e9f3db7c4", 00:10:39.070 "is_configured": true, 00:10:39.070 "data_offset": 0, 00:10:39.070 "data_size": 65536 00:10:39.070 } 00:10:39.070 ] 00:10:39.070 }' 00:10:39.070 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.070 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.638 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:39.638 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.638 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:39.638 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.897 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:39.897 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.897 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:40.157 
[2024-07-25 11:52:26.089031] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.157 [2024-07-25 11:52:26.089074] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xab0600 name Existed_Raid, state offline 00:10:40.157 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:40.157 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:40.157 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.157 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4091945 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4091945 ']' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4091945 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4091945 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4091945' 00:10:40.416 killing process with pid 4091945 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4091945 00:10:40.416 [2024-07-25 11:52:26.406618] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.416 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4091945 00:10:40.416 [2024-07-25 11:52:26.407482] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:40.675 00:10:40.675 real 0m10.574s 00:10:40.675 user 0m18.828s 00:10:40.675 sys 0m1.925s 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.675 ************************************ 00:10:40.675 END TEST raid_state_function_test 00:10:40.675 ************************************ 00:10:40.675 11:52:26 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:40.675 11:52:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:40.675 11:52:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.675 11:52:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.675 
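The passing raid_state_function_test above boils down to a short sequence of framework RPCs against the app's socket. The sketch below is a condensed reconstruction from the trace, not the test script itself: the rpc() wrapper is defined here only for brevity, the paths, sizes and names are the ones used in this run, and error handling is omitted.

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # two 32 MB malloc base bdevs with 512-byte blocks (the 65536-block disks dumped above)
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_malloc_create 32 512 -b BaseBdev2

  # assemble them into a raid0 volume with a 64 KB strip size; -s additionally writes a superblock
  # (the plain run above created its volume with "superblock": false, the _sb variant below passes -s)
  rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # raid0 carries no redundancy, so removing one base bdev must take the volume offline
  rpc bdev_malloc_delete BaseBdev1
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: offline

verify_raid_bdev_state in the trace is essentially that last query plus comparisons of .state, .raid_level, .strip_size_kb and the discovered/operational base bdev counts against the expected values it was passed.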
************************************ 00:10:40.675 START TEST raid_state_function_test_sb 00:10:40.675 ************************************ 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4094020 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4094020' 00:10:40.675 Process raid pid: 4094020 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4094020 /var/tmp/spdk-raid.sock 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 
-- # '[' -z 4094020 ']' 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:40.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.675 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.675 [2024-07-25 11:52:26.747042] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:40.675 [2024-07-25 11:52:26.747100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:40.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.934 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:40.934 [2024-07-25 11:52:26.882745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.934 [2024-07-25 11:52:26.968924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.934 [2024-07-25 11:52:27.028999] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.934 [2024-07-25 11:52:27.029034] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:41.871 [2024-07-25 11:52:27.864711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.871 [2024-07-25 11:52:27.864748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.871 [2024-07-25 11:52:27.864759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.871 [2024-07-25 11:52:27.864770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.871 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.130 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.130 "name": "Existed_Raid", 00:10:42.130 "uuid": "402c7ffd-0a16-401b-ac1a-906a20c9279e", 00:10:42.130 "strip_size_kb": 64, 00:10:42.130 "state": "configuring", 00:10:42.130 "raid_level": "raid0", 00:10:42.130 "superblock": true, 00:10:42.130 "num_base_bdevs": 2, 00:10:42.130 "num_base_bdevs_discovered": 0, 00:10:42.130 "num_base_bdevs_operational": 2, 00:10:42.130 "base_bdevs_list": [ 00:10:42.130 { 00:10:42.130 "name": "BaseBdev1", 00:10:42.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.130 "is_configured": false, 00:10:42.130 "data_offset": 0, 00:10:42.130 "data_size": 0 00:10:42.130 }, 00:10:42.130 { 00:10:42.130 "name": "BaseBdev2", 00:10:42.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.130 "is_configured": false, 00:10:42.130 "data_offset": 0, 00:10:42.130 "data_size": 0 00:10:42.130 } 00:10:42.130 ] 00:10:42.130 }' 00:10:42.130 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.130 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.698 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:42.957 [2024-07-25 11:52:28.899288] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.957 [2024-07-25 11:52:28.899315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ddcf20 name Existed_Raid, state configuring 00:10:42.957 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:43.217 [2024-07-25 11:52:29.127900] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.217 [2024-07-25 11:52:29.127926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.217 [2024-07-25 11:52:29.127935] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.217 [2024-07-25 11:52:29.127946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.217 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.476 [2024-07-25 11:52:29.362162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.476 BaseBdev1 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:43.476 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.735 [ 00:10:43.735 { 00:10:43.735 "name": "BaseBdev1", 00:10:43.735 "aliases": [ 00:10:43.735 "9384f65c-b780-473d-83f6-065b08495d64" 00:10:43.735 ], 00:10:43.735 "product_name": "Malloc disk", 00:10:43.735 "block_size": 512, 00:10:43.735 "num_blocks": 65536, 00:10:43.735 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:43.735 "assigned_rate_limits": { 00:10:43.735 "rw_ios_per_sec": 0, 00:10:43.735 "rw_mbytes_per_sec": 0, 00:10:43.735 "r_mbytes_per_sec": 0, 00:10:43.735 "w_mbytes_per_sec": 0 00:10:43.735 }, 00:10:43.735 "claimed": true, 00:10:43.736 "claim_type": "exclusive_write", 00:10:43.736 "zoned": false, 00:10:43.736 "supported_io_types": { 00:10:43.736 "read": true, 00:10:43.736 "write": true, 00:10:43.736 "unmap": true, 00:10:43.736 "flush": true, 00:10:43.736 "reset": true, 00:10:43.736 "nvme_admin": false, 00:10:43.736 "nvme_io": false, 00:10:43.736 "nvme_io_md": false, 00:10:43.736 "write_zeroes": true, 00:10:43.736 "zcopy": true, 00:10:43.736 "get_zone_info": false, 00:10:43.736 "zone_management": false, 00:10:43.736 "zone_append": false, 00:10:43.736 "compare": false, 00:10:43.736 "compare_and_write": false, 00:10:43.736 "abort": true, 00:10:43.736 "seek_hole": false, 00:10:43.736 "seek_data": false, 00:10:43.736 "copy": true, 00:10:43.736 "nvme_iov_md": false 00:10:43.736 }, 00:10:43.736 "memory_domains": [ 00:10:43.736 { 00:10:43.736 "dma_device_id": "system", 00:10:43.736 "dma_device_type": 1 00:10:43.736 }, 00:10:43.736 { 00:10:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.736 "dma_device_type": 2 00:10:43.736 } 
00:10:43.736 ], 00:10:43.736 "driver_specific": {} 00:10:43.736 } 00:10:43.736 ] 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.736 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.995 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.995 "name": "Existed_Raid", 00:10:43.995 "uuid": "98d9f083-4c55-4e2d-8833-d13fdad15717", 00:10:43.995 "strip_size_kb": 64, 00:10:43.995 "state": "configuring", 00:10:43.995 "raid_level": "raid0", 00:10:43.995 "superblock": true, 00:10:43.995 "num_base_bdevs": 2, 00:10:43.995 "num_base_bdevs_discovered": 1, 00:10:43.995 "num_base_bdevs_operational": 2, 00:10:43.995 "base_bdevs_list": [ 00:10:43.995 { 00:10:43.995 "name": "BaseBdev1", 00:10:43.995 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:43.995 "is_configured": true, 00:10:43.995 "data_offset": 2048, 00:10:43.995 "data_size": 63488 00:10:43.995 }, 00:10:43.995 { 00:10:43.995 "name": "BaseBdev2", 00:10:43.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.995 "is_configured": false, 00:10:43.995 "data_offset": 0, 00:10:43.995 "data_size": 0 00:10:43.995 } 00:10:43.995 ] 00:10:43.995 }' 00:10:43.995 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.995 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.564 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:44.823 [2024-07-25 11:52:30.805941] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.823 [2024-07-25 11:52:30.805973] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ddc810 name Existed_Raid, state configuring 00:10:44.823 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:45.081 [2024-07-25 11:52:31.030562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.081 [2024-07-25 11:52:31.031953] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.081 [2024-07-25 11:52:31.031985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.081 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.339 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.339 "name": "Existed_Raid", 00:10:45.339 "uuid": "e68de662-dd2d-4af4-bf1a-73c4ce59cc56", 00:10:45.339 "strip_size_kb": 64, 00:10:45.339 "state": "configuring", 00:10:45.339 "raid_level": "raid0", 00:10:45.339 "superblock": true, 00:10:45.339 "num_base_bdevs": 2, 00:10:45.339 "num_base_bdevs_discovered": 1, 00:10:45.339 "num_base_bdevs_operational": 2, 00:10:45.339 "base_bdevs_list": [ 00:10:45.339 { 00:10:45.339 "name": "BaseBdev1", 00:10:45.339 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:45.339 "is_configured": true, 00:10:45.339 "data_offset": 2048, 00:10:45.339 "data_size": 63488 00:10:45.339 }, 00:10:45.339 { 00:10:45.339 "name": "BaseBdev2", 00:10:45.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.339 "is_configured": false, 00:10:45.339 "data_offset": 0, 00:10:45.339 "data_size": 0 00:10:45.339 } 00:10:45.339 ] 00:10:45.339 }' 00:10:45.339 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.339 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.166 [2024-07-25 11:52:32.052371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.166 [2024-07-25 11:52:32.052506] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ddd600 00:10:46.166 [2024-07-25 11:52:32.052519] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:46.166 [2024-07-25 11:52:32.052681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dde840 00:10:46.166 [2024-07-25 11:52:32.052788] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ddd600 00:10:46.166 [2024-07-25 11:52:32.052797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ddd600 00:10:46.166 [2024-07-25 11:52:32.052885] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.166 BaseBdev2 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.166 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.426 [ 00:10:46.426 { 00:10:46.426 "name": "BaseBdev2", 00:10:46.426 "aliases": [ 00:10:46.426 "627ae23b-8b2b-43e6-9f49-a67954471bb7" 00:10:46.426 ], 00:10:46.426 "product_name": "Malloc disk", 00:10:46.426 "block_size": 512, 00:10:46.426 "num_blocks": 65536, 00:10:46.426 "uuid": "627ae23b-8b2b-43e6-9f49-a67954471bb7", 00:10:46.426 "assigned_rate_limits": { 00:10:46.426 "rw_ios_per_sec": 0, 00:10:46.426 "rw_mbytes_per_sec": 0, 00:10:46.426 "r_mbytes_per_sec": 0, 00:10:46.426 "w_mbytes_per_sec": 0 00:10:46.426 }, 00:10:46.426 "claimed": true, 00:10:46.426 "claim_type": "exclusive_write", 00:10:46.426 "zoned": false, 00:10:46.426 "supported_io_types": { 00:10:46.426 "read": true, 00:10:46.426 "write": true, 00:10:46.426 "unmap": true, 00:10:46.426 "flush": true, 00:10:46.426 "reset": true, 00:10:46.426 "nvme_admin": false, 00:10:46.426 "nvme_io": false, 00:10:46.426 "nvme_io_md": false, 00:10:46.426 "write_zeroes": true, 00:10:46.426 "zcopy": true, 00:10:46.426 "get_zone_info": false, 00:10:46.426 "zone_management": false, 00:10:46.426 "zone_append": false, 00:10:46.426 "compare": false, 00:10:46.426 "compare_and_write": false, 00:10:46.426 "abort": true, 00:10:46.426 "seek_hole": false, 00:10:46.426 "seek_data": false, 00:10:46.426 "copy": true, 00:10:46.426 "nvme_iov_md": false 00:10:46.426 }, 00:10:46.426 "memory_domains": [ 00:10:46.426 { 00:10:46.426 
"dma_device_id": "system", 00:10:46.426 "dma_device_type": 1 00:10:46.426 }, 00:10:46.426 { 00:10:46.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.426 "dma_device_type": 2 00:10:46.426 } 00:10:46.426 ], 00:10:46.426 "driver_specific": {} 00:10:46.426 } 00:10:46.426 ] 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.426 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.686 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.686 "name": "Existed_Raid", 00:10:46.686 "uuid": "e68de662-dd2d-4af4-bf1a-73c4ce59cc56", 00:10:46.686 "strip_size_kb": 64, 00:10:46.686 "state": "online", 00:10:46.686 "raid_level": "raid0", 00:10:46.686 "superblock": true, 00:10:46.686 "num_base_bdevs": 2, 00:10:46.686 "num_base_bdevs_discovered": 2, 00:10:46.686 "num_base_bdevs_operational": 2, 00:10:46.686 "base_bdevs_list": [ 00:10:46.686 { 00:10:46.686 "name": "BaseBdev1", 00:10:46.686 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:46.686 "is_configured": true, 00:10:46.686 "data_offset": 2048, 00:10:46.686 "data_size": 63488 00:10:46.686 }, 00:10:46.686 { 00:10:46.686 "name": "BaseBdev2", 00:10:46.686 "uuid": "627ae23b-8b2b-43e6-9f49-a67954471bb7", 00:10:46.686 "is_configured": true, 00:10:46.686 "data_offset": 2048, 00:10:46.686 "data_size": 63488 00:10:46.686 } 00:10:46.686 ] 00:10:46.686 }' 00:10:46.686 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.686 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:47.254 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:47.513 [2024-07-25 11:52:33.508455] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:47.513 "name": "Existed_Raid", 00:10:47.513 "aliases": [ 00:10:47.513 "e68de662-dd2d-4af4-bf1a-73c4ce59cc56" 00:10:47.513 ], 00:10:47.513 "product_name": "Raid Volume", 00:10:47.513 "block_size": 512, 00:10:47.513 "num_blocks": 126976, 00:10:47.513 "uuid": "e68de662-dd2d-4af4-bf1a-73c4ce59cc56", 00:10:47.513 "assigned_rate_limits": { 00:10:47.513 "rw_ios_per_sec": 0, 00:10:47.513 "rw_mbytes_per_sec": 0, 00:10:47.513 "r_mbytes_per_sec": 0, 00:10:47.513 "w_mbytes_per_sec": 0 00:10:47.513 }, 00:10:47.513 "claimed": false, 00:10:47.513 "zoned": false, 00:10:47.513 "supported_io_types": { 00:10:47.513 "read": true, 00:10:47.513 "write": true, 00:10:47.513 "unmap": true, 00:10:47.513 "flush": true, 00:10:47.513 "reset": true, 00:10:47.513 "nvme_admin": false, 00:10:47.513 "nvme_io": false, 00:10:47.513 "nvme_io_md": false, 00:10:47.513 "write_zeroes": true, 00:10:47.513 "zcopy": false, 00:10:47.513 "get_zone_info": false, 00:10:47.513 "zone_management": false, 00:10:47.513 "zone_append": false, 00:10:47.513 "compare": false, 00:10:47.513 "compare_and_write": false, 00:10:47.513 "abort": false, 00:10:47.513 "seek_hole": false, 00:10:47.513 "seek_data": false, 00:10:47.513 "copy": false, 00:10:47.513 "nvme_iov_md": false 00:10:47.513 }, 00:10:47.513 "memory_domains": [ 00:10:47.513 { 00:10:47.513 "dma_device_id": "system", 00:10:47.513 "dma_device_type": 1 00:10:47.513 }, 00:10:47.513 { 00:10:47.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.513 "dma_device_type": 2 00:10:47.513 }, 00:10:47.513 { 00:10:47.513 "dma_device_id": "system", 00:10:47.513 "dma_device_type": 1 00:10:47.513 }, 00:10:47.513 { 00:10:47.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.513 "dma_device_type": 2 00:10:47.513 } 00:10:47.513 ], 00:10:47.513 "driver_specific": { 00:10:47.513 "raid": { 00:10:47.513 "uuid": "e68de662-dd2d-4af4-bf1a-73c4ce59cc56", 00:10:47.513 "strip_size_kb": 64, 00:10:47.513 "state": "online", 00:10:47.513 "raid_level": "raid0", 00:10:47.513 "superblock": true, 00:10:47.513 "num_base_bdevs": 2, 00:10:47.513 "num_base_bdevs_discovered": 2, 00:10:47.513 "num_base_bdevs_operational": 2, 00:10:47.513 "base_bdevs_list": [ 00:10:47.513 { 00:10:47.513 "name": "BaseBdev1", 00:10:47.513 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:47.513 "is_configured": true, 00:10:47.513 "data_offset": 2048, 00:10:47.513 "data_size": 63488 00:10:47.513 }, 00:10:47.513 { 00:10:47.513 "name": "BaseBdev2", 00:10:47.513 "uuid": "627ae23b-8b2b-43e6-9f49-a67954471bb7", 00:10:47.513 "is_configured": true, 00:10:47.513 
"data_offset": 2048, 00:10:47.513 "data_size": 63488 00:10:47.513 } 00:10:47.513 ] 00:10:47.513 } 00:10:47.513 } 00:10:47.513 }' 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:47.513 BaseBdev2' 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:47.513 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:47.772 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:47.773 "name": "BaseBdev1", 00:10:47.773 "aliases": [ 00:10:47.773 "9384f65c-b780-473d-83f6-065b08495d64" 00:10:47.773 ], 00:10:47.773 "product_name": "Malloc disk", 00:10:47.773 "block_size": 512, 00:10:47.773 "num_blocks": 65536, 00:10:47.773 "uuid": "9384f65c-b780-473d-83f6-065b08495d64", 00:10:47.773 "assigned_rate_limits": { 00:10:47.773 "rw_ios_per_sec": 0, 00:10:47.773 "rw_mbytes_per_sec": 0, 00:10:47.773 "r_mbytes_per_sec": 0, 00:10:47.773 "w_mbytes_per_sec": 0 00:10:47.773 }, 00:10:47.773 "claimed": true, 00:10:47.773 "claim_type": "exclusive_write", 00:10:47.773 "zoned": false, 00:10:47.773 "supported_io_types": { 00:10:47.773 "read": true, 00:10:47.773 "write": true, 00:10:47.773 "unmap": true, 00:10:47.773 "flush": true, 00:10:47.773 "reset": true, 00:10:47.773 "nvme_admin": false, 00:10:47.773 "nvme_io": false, 00:10:47.773 "nvme_io_md": false, 00:10:47.773 "write_zeroes": true, 00:10:47.773 "zcopy": true, 00:10:47.773 "get_zone_info": false, 00:10:47.773 "zone_management": false, 00:10:47.773 "zone_append": false, 00:10:47.773 "compare": false, 00:10:47.773 "compare_and_write": false, 00:10:47.773 "abort": true, 00:10:47.773 "seek_hole": false, 00:10:47.773 "seek_data": false, 00:10:47.773 "copy": true, 00:10:47.773 "nvme_iov_md": false 00:10:47.773 }, 00:10:47.773 "memory_domains": [ 00:10:47.773 { 00:10:47.773 "dma_device_id": "system", 00:10:47.773 "dma_device_type": 1 00:10:47.773 }, 00:10:47.773 { 00:10:47.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.773 "dma_device_type": 2 00:10:47.773 } 00:10:47.773 ], 00:10:47.773 "driver_specific": {} 00:10:47.773 }' 00:10:47.773 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:47.773 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.031 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.031 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.031 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.031 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.031 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.031 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.031 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:10:48.031 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.032 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.032 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:48.032 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:48.032 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:48.032 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.328 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.328 "name": "BaseBdev2", 00:10:48.328 "aliases": [ 00:10:48.328 "627ae23b-8b2b-43e6-9f49-a67954471bb7" 00:10:48.328 ], 00:10:48.328 "product_name": "Malloc disk", 00:10:48.328 "block_size": 512, 00:10:48.328 "num_blocks": 65536, 00:10:48.328 "uuid": "627ae23b-8b2b-43e6-9f49-a67954471bb7", 00:10:48.328 "assigned_rate_limits": { 00:10:48.328 "rw_ios_per_sec": 0, 00:10:48.328 "rw_mbytes_per_sec": 0, 00:10:48.328 "r_mbytes_per_sec": 0, 00:10:48.328 "w_mbytes_per_sec": 0 00:10:48.328 }, 00:10:48.328 "claimed": true, 00:10:48.328 "claim_type": "exclusive_write", 00:10:48.328 "zoned": false, 00:10:48.328 "supported_io_types": { 00:10:48.328 "read": true, 00:10:48.328 "write": true, 00:10:48.328 "unmap": true, 00:10:48.328 "flush": true, 00:10:48.328 "reset": true, 00:10:48.328 "nvme_admin": false, 00:10:48.328 "nvme_io": false, 00:10:48.328 "nvme_io_md": false, 00:10:48.328 "write_zeroes": true, 00:10:48.328 "zcopy": true, 00:10:48.328 "get_zone_info": false, 00:10:48.328 "zone_management": false, 00:10:48.328 "zone_append": false, 00:10:48.328 "compare": false, 00:10:48.328 "compare_and_write": false, 00:10:48.328 "abort": true, 00:10:48.328 "seek_hole": false, 00:10:48.328 "seek_data": false, 00:10:48.328 "copy": true, 00:10:48.328 "nvme_iov_md": false 00:10:48.328 }, 00:10:48.328 "memory_domains": [ 00:10:48.328 { 00:10:48.328 "dma_device_id": "system", 00:10:48.328 "dma_device_type": 1 00:10:48.328 }, 00:10:48.328 { 00:10:48.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.328 "dma_device_type": 2 00:10:48.328 } 00:10:48.328 ], 00:10:48.328 "driver_specific": {} 00:10:48.328 }' 00:10:48.328 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.328 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:48.588 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.588 11:52:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:48.847 [2024-07-25 11:52:34.911944] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.847 [2024-07-25 11:52:34.911966] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.847 [2024-07-25 11:52:34.912003] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.847 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.106 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.106 "name": "Existed_Raid", 00:10:49.106 "uuid": "e68de662-dd2d-4af4-bf1a-73c4ce59cc56", 00:10:49.106 "strip_size_kb": 64, 00:10:49.106 "state": "offline", 00:10:49.106 "raid_level": "raid0", 00:10:49.106 "superblock": true, 00:10:49.106 "num_base_bdevs": 2, 00:10:49.106 "num_base_bdevs_discovered": 1, 00:10:49.106 "num_base_bdevs_operational": 1, 00:10:49.106 "base_bdevs_list": [ 00:10:49.106 { 00:10:49.106 "name": null, 00:10:49.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.106 "is_configured": false, 00:10:49.106 "data_offset": 2048, 00:10:49.106 "data_size": 63488 00:10:49.106 }, 00:10:49.106 { 
00:10:49.106 "name": "BaseBdev2", 00:10:49.106 "uuid": "627ae23b-8b2b-43e6-9f49-a67954471bb7", 00:10:49.106 "is_configured": true, 00:10:49.106 "data_offset": 2048, 00:10:49.106 "data_size": 63488 00:10:49.106 } 00:10:49.106 ] 00:10:49.106 }' 00:10:49.106 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.106 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.673 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:49.673 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:49.673 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.673 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:49.932 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:49.932 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.932 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:50.191 [2024-07-25 11:52:36.180273] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.191 [2024-07-25 11:52:36.180314] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ddd600 name Existed_Raid, state offline 00:10:50.191 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:50.191 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:50.191 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.191 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4094020 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4094020 ']' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4094020 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4094020 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 4094020' 00:10:50.450 killing process with pid 4094020 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4094020 00:10:50.450 [2024-07-25 11:52:36.494489] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.450 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4094020 00:10:50.450 [2024-07-25 11:52:36.495342] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.710 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:50.710 00:10:50.710 real 0m10.004s 00:10:50.710 user 0m17.742s 00:10:50.710 sys 0m1.914s 00:10:50.710 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.710 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.710 ************************************ 00:10:50.710 END TEST raid_state_function_test_sb 00:10:50.710 ************************************ 00:10:50.710 11:52:36 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:50.710 11:52:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:50.710 11:52:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.710 11:52:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.710 ************************************ 00:10:50.710 START TEST raid_superblock_test 00:10:50.710 ************************************ 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4095850 00:10:50.710 
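The pid recorded just above (raid_pid=4095850) comes from the stock start-up pattern that the next trace lines replay: bdev_svc is launched in the background with an RPC socket and bdev_raid debug logging, and the autotest_common.sh helpers waitforlisten/killprocess bracket the test around it. A rough sketch of that pattern with this run's paths, assuming those helpers are sourced (their internals are not reproduced here; the trace prints raid_pid already expanded, $! is simply the usual way that value is captured):

  # start the minimal SPDK bdev application the raid tests drive over RPC
  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!

  # block until the process is alive and its RPC socket answers
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock

  # ... test body: bdev_malloc_create / bdev_raid_create / state checks over the socket ...

  # tear down, as the killprocess calls earlier in the trace do
  killprocess $raid_pid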
11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4095850 /var/tmp/spdk-raid.sock 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4095850 ']' 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:50.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.710 11:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.969 [2024-07-25 11:52:36.835299] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:50.969 [2024-07-25 11:52:36.835358] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4095850 ] 00:10:50.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.969 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:50.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.969 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:50.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.969 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:50.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.969 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:50.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.5 cannot be 
used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:50.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.970 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:50.970 [2024-07-25 11:52:36.967503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.970 [2024-07-25 11:52:37.053855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.229 [2024-07-25 11:52:37.120073] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.229 [2024-07-25 11:52:37.120104] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test 
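The startup traced above amounts to launching the standalone bdev_svc application on its own RPC socket and waiting for that socket to come up; the repeated qat_pci_device_allocate()/EAL lines report QAT devices that cannot be used during EAL init and do not prevent startup here (the reactor still comes up on core 0). A minimal sketch with the same arguments, assuming the waitforlisten helper from autotest_common.sh is sourced and that the pid is captured with $!:

  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock
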
-- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.797 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:52.056 malloc1 00:10:52.056 11:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.316 [2024-07-25 11:52:38.177505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.316 [2024-07-25 11:52:38.177547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.316 [2024-07-25 11:52:38.177565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df42f0 00:10:52.316 [2024-07-25 11:52:38.177576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.316 [2024-07-25 11:52:38.179083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.316 [2024-07-25 11:52:38.179111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.316 pt1 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:52.316 malloc2 00:10:52.316 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.575 [2024-07-25 11:52:38.623167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.575 [2024-07-25 11:52:38.623207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.575 [2024-07-25 11:52:38.623223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df56d0 00:10:52.575 [2024-07-25 11:52:38.623234] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.575 [2024-07-25 11:52:38.624652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.575 [2024-07-25 11:52:38.624680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.575 pt2 00:10:52.575 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:52.575 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:52.575 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:52.835 [2024-07-25 11:52:38.847783] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.835 [2024-07-25 11:52:38.848977] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.835 [2024-07-25 11:52:38.849104] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f8e310 00:10:52.835 [2024-07-25 11:52:38.849117] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:52.835 [2024-07-25 11:52:38.849309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8dce0 00:10:52.835 [2024-07-25 11:52:38.849438] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f8e310 00:10:52.835 [2024-07-25 11:52:38.849447] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f8e310 00:10:52.835 [2024-07-25 11:52:38.849538] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.835 11:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.094 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.094 "name": "raid_bdev1", 00:10:53.094 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:53.094 "strip_size_kb": 64, 00:10:53.094 "state": "online", 00:10:53.094 "raid_level": "raid0", 00:10:53.094 "superblock": true, 00:10:53.094 "num_base_bdevs": 2, 00:10:53.094 
"num_base_bdevs_discovered": 2, 00:10:53.094 "num_base_bdevs_operational": 2, 00:10:53.094 "base_bdevs_list": [ 00:10:53.094 { 00:10:53.094 "name": "pt1", 00:10:53.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.094 "is_configured": true, 00:10:53.094 "data_offset": 2048, 00:10:53.094 "data_size": 63488 00:10:53.094 }, 00:10:53.094 { 00:10:53.094 "name": "pt2", 00:10:53.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.094 "is_configured": true, 00:10:53.094 "data_offset": 2048, 00:10:53.094 "data_size": 63488 00:10:53.094 } 00:10:53.094 ] 00:10:53.094 }' 00:10:53.094 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.094 11:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:53.663 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:53.922 [2024-07-25 11:52:39.886688] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.922 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:53.922 "name": "raid_bdev1", 00:10:53.922 "aliases": [ 00:10:53.922 "68fdbe6b-0f64-4935-a11c-88576d17b9db" 00:10:53.922 ], 00:10:53.922 "product_name": "Raid Volume", 00:10:53.922 "block_size": 512, 00:10:53.922 "num_blocks": 126976, 00:10:53.922 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:53.922 "assigned_rate_limits": { 00:10:53.922 "rw_ios_per_sec": 0, 00:10:53.922 "rw_mbytes_per_sec": 0, 00:10:53.922 "r_mbytes_per_sec": 0, 00:10:53.923 "w_mbytes_per_sec": 0 00:10:53.923 }, 00:10:53.923 "claimed": false, 00:10:53.923 "zoned": false, 00:10:53.923 "supported_io_types": { 00:10:53.923 "read": true, 00:10:53.923 "write": true, 00:10:53.923 "unmap": true, 00:10:53.923 "flush": true, 00:10:53.923 "reset": true, 00:10:53.923 "nvme_admin": false, 00:10:53.923 "nvme_io": false, 00:10:53.923 "nvme_io_md": false, 00:10:53.923 "write_zeroes": true, 00:10:53.923 "zcopy": false, 00:10:53.923 "get_zone_info": false, 00:10:53.923 "zone_management": false, 00:10:53.923 "zone_append": false, 00:10:53.923 "compare": false, 00:10:53.923 "compare_and_write": false, 00:10:53.923 "abort": false, 00:10:53.923 "seek_hole": false, 00:10:53.923 "seek_data": false, 00:10:53.923 "copy": false, 00:10:53.923 "nvme_iov_md": false 00:10:53.923 }, 00:10:53.923 "memory_domains": [ 00:10:53.923 { 00:10:53.923 "dma_device_id": "system", 00:10:53.923 "dma_device_type": 1 00:10:53.923 }, 00:10:53.923 { 00:10:53.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.923 "dma_device_type": 2 00:10:53.923 }, 00:10:53.923 { 00:10:53.923 "dma_device_id": "system", 00:10:53.923 "dma_device_type": 1 00:10:53.923 
}, 00:10:53.923 { 00:10:53.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.923 "dma_device_type": 2 00:10:53.923 } 00:10:53.923 ], 00:10:53.923 "driver_specific": { 00:10:53.923 "raid": { 00:10:53.923 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:53.923 "strip_size_kb": 64, 00:10:53.923 "state": "online", 00:10:53.923 "raid_level": "raid0", 00:10:53.923 "superblock": true, 00:10:53.923 "num_base_bdevs": 2, 00:10:53.923 "num_base_bdevs_discovered": 2, 00:10:53.923 "num_base_bdevs_operational": 2, 00:10:53.923 "base_bdevs_list": [ 00:10:53.923 { 00:10:53.923 "name": "pt1", 00:10:53.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.923 "is_configured": true, 00:10:53.923 "data_offset": 2048, 00:10:53.923 "data_size": 63488 00:10:53.923 }, 00:10:53.923 { 00:10:53.923 "name": "pt2", 00:10:53.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.923 "is_configured": true, 00:10:53.923 "data_offset": 2048, 00:10:53.923 "data_size": 63488 00:10:53.923 } 00:10:53.923 ] 00:10:53.923 } 00:10:53.923 } 00:10:53.923 }' 00:10:53.923 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.923 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:53.923 pt2' 00:10:53.923 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:53.923 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:53.923 11:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.183 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.183 "name": "pt1", 00:10:54.183 "aliases": [ 00:10:54.183 "00000000-0000-0000-0000-000000000001" 00:10:54.183 ], 00:10:54.183 "product_name": "passthru", 00:10:54.183 "block_size": 512, 00:10:54.183 "num_blocks": 65536, 00:10:54.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.183 "assigned_rate_limits": { 00:10:54.183 "rw_ios_per_sec": 0, 00:10:54.183 "rw_mbytes_per_sec": 0, 00:10:54.183 "r_mbytes_per_sec": 0, 00:10:54.183 "w_mbytes_per_sec": 0 00:10:54.183 }, 00:10:54.183 "claimed": true, 00:10:54.183 "claim_type": "exclusive_write", 00:10:54.183 "zoned": false, 00:10:54.183 "supported_io_types": { 00:10:54.183 "read": true, 00:10:54.183 "write": true, 00:10:54.183 "unmap": true, 00:10:54.183 "flush": true, 00:10:54.183 "reset": true, 00:10:54.183 "nvme_admin": false, 00:10:54.183 "nvme_io": false, 00:10:54.183 "nvme_io_md": false, 00:10:54.183 "write_zeroes": true, 00:10:54.183 "zcopy": true, 00:10:54.183 "get_zone_info": false, 00:10:54.183 "zone_management": false, 00:10:54.183 "zone_append": false, 00:10:54.183 "compare": false, 00:10:54.183 "compare_and_write": false, 00:10:54.183 "abort": true, 00:10:54.183 "seek_hole": false, 00:10:54.183 "seek_data": false, 00:10:54.183 "copy": true, 00:10:54.183 "nvme_iov_md": false 00:10:54.183 }, 00:10:54.183 "memory_domains": [ 00:10:54.183 { 00:10:54.183 "dma_device_id": "system", 00:10:54.183 "dma_device_type": 1 00:10:54.183 }, 00:10:54.183 { 00:10:54.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.183 "dma_device_type": 2 00:10:54.183 } 00:10:54.183 ], 00:10:54.183 "driver_specific": { 00:10:54.183 "passthru": { 00:10:54.183 "name": "pt1", 00:10:54.183 "base_bdev_name": "malloc1" 00:10:54.183 } 
00:10:54.183 } 00:10:54.183 }' 00:10:54.183 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.183 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.183 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.183 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:54.442 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.701 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.701 "name": "pt2", 00:10:54.701 "aliases": [ 00:10:54.701 "00000000-0000-0000-0000-000000000002" 00:10:54.701 ], 00:10:54.701 "product_name": "passthru", 00:10:54.701 "block_size": 512, 00:10:54.701 "num_blocks": 65536, 00:10:54.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.701 "assigned_rate_limits": { 00:10:54.701 "rw_ios_per_sec": 0, 00:10:54.701 "rw_mbytes_per_sec": 0, 00:10:54.701 "r_mbytes_per_sec": 0, 00:10:54.701 "w_mbytes_per_sec": 0 00:10:54.701 }, 00:10:54.701 "claimed": true, 00:10:54.701 "claim_type": "exclusive_write", 00:10:54.701 "zoned": false, 00:10:54.701 "supported_io_types": { 00:10:54.701 "read": true, 00:10:54.701 "write": true, 00:10:54.701 "unmap": true, 00:10:54.701 "flush": true, 00:10:54.701 "reset": true, 00:10:54.701 "nvme_admin": false, 00:10:54.701 "nvme_io": false, 00:10:54.701 "nvme_io_md": false, 00:10:54.701 "write_zeroes": true, 00:10:54.701 "zcopy": true, 00:10:54.701 "get_zone_info": false, 00:10:54.701 "zone_management": false, 00:10:54.701 "zone_append": false, 00:10:54.701 "compare": false, 00:10:54.701 "compare_and_write": false, 00:10:54.701 "abort": true, 00:10:54.701 "seek_hole": false, 00:10:54.701 "seek_data": false, 00:10:54.701 "copy": true, 00:10:54.701 "nvme_iov_md": false 00:10:54.701 }, 00:10:54.701 "memory_domains": [ 00:10:54.701 { 00:10:54.701 "dma_device_id": "system", 00:10:54.701 "dma_device_type": 1 00:10:54.701 }, 00:10:54.701 { 00:10:54.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.701 "dma_device_type": 2 00:10:54.701 } 00:10:54.701 ], 00:10:54.701 "driver_specific": { 00:10:54.701 "passthru": { 00:10:54.701 "name": "pt2", 00:10:54.701 "base_bdev_name": "malloc2" 00:10:54.701 } 00:10:54.701 } 00:10:54.701 }' 00:10:54.701 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
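Condensed, the create-and-verify phase traced above issues the following RPCs and jq filters (a sketch, not a verbatim excerpt of bdev_raid.sh; rpc.py stands for the full scripts/rpc.py path used throughout this run):

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
  # state, raid_level and strip_size_kb are read back from the raid module:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # configured base bdevs and their block_size/md_size/dif_type come from bdev_get_bdevs:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
  rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 | jq '.[]'
  rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 | jq '.[]'
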
00:10:54.701 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:54.960 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.960 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.219 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:55.220 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:55.220 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:55.220 [2024-07-25 11:52:41.298407] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.220 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=68fdbe6b-0f64-4935-a11c-88576d17b9db 00:10:55.220 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 68fdbe6b-0f64-4935-a11c-88576d17b9db ']' 00:10:55.220 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:55.479 [2024-07-25 11:52:41.526778] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.479 [2024-07-25 11:52:41.526799] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.479 [2024-07-25 11:52:41.526848] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.479 [2024-07-25 11:52:41.526889] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.479 [2024-07-25 11:52:41.526900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f8e310 name raid_bdev1, state offline 00:10:55.479 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.479 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:55.739 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:55.739 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:55.739 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.739 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:55.998 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:10:55.998 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:56.257 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:56.257 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:56.517 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:56.776 [2024-07-25 11:52:42.649687] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:56.776 [2024-07-25 11:52:42.650908] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:56.776 [2024-07-25 11:52:42.650958] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:56.776 [2024-07-25 11:52:42.650994] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:56.776 [2024-07-25 11:52:42.651012] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.776 [2024-07-25 11:52:42.651020] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f973f0 name raid_bdev1, state configuring 00:10:56.776 request: 
00:10:56.776 { 00:10:56.776 "name": "raid_bdev1", 00:10:56.776 "raid_level": "raid0", 00:10:56.776 "base_bdevs": [ 00:10:56.776 "malloc1", 00:10:56.776 "malloc2" 00:10:56.776 ], 00:10:56.776 "strip_size_kb": 64, 00:10:56.776 "superblock": false, 00:10:56.776 "method": "bdev_raid_create", 00:10:56.776 "req_id": 1 00:10:56.776 } 00:10:56.776 Got JSON-RPC error response 00:10:56.776 response: 00:10:56.776 { 00:10:56.776 "code": -17, 00:10:56.776 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:56.776 } 00:10:56.776 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:56.776 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:56.776 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:56.776 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:56.777 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.777 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:57.036 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:57.036 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:57.036 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.036 [2024-07-25 11:52:43.106828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.036 [2024-07-25 11:52:43.106865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.036 [2024-07-25 11:52:43.106881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f97d70 00:10:57.036 [2024-07-25 11:52:43.106893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.036 [2024-07-25 11:52:43.108303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.036 [2024-07-25 11:52:43.108329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.036 [2024-07-25 11:52:43.108387] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.036 [2024-07-25 11:52:43.108412] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.036 pt1 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.036 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.295 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.295 "name": "raid_bdev1", 00:10:57.295 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:57.295 "strip_size_kb": 64, 00:10:57.295 "state": "configuring", 00:10:57.295 "raid_level": "raid0", 00:10:57.295 "superblock": true, 00:10:57.295 "num_base_bdevs": 2, 00:10:57.295 "num_base_bdevs_discovered": 1, 00:10:57.295 "num_base_bdevs_operational": 2, 00:10:57.295 "base_bdevs_list": [ 00:10:57.295 { 00:10:57.295 "name": "pt1", 00:10:57.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.295 "is_configured": true, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 }, 00:10:57.295 { 00:10:57.295 "name": null, 00:10:57.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.295 "is_configured": false, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 } 00:10:57.295 ] 00:10:57.295 }' 00:10:57.295 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.295 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.862 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:57.862 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:57.862 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:57.862 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.121 [2024-07-25 11:52:44.137561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.121 [2024-07-25 11:52:44.137608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.121 [2024-07-25 11:52:44.137624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8ebb0 00:10:58.121 [2024-07-25 11:52:44.137635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.121 [2024-07-25 11:52:44.137947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.121 [2024-07-25 11:52:44.137965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.121 [2024-07-25 11:52:44.138021] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.121 [2024-07-25 11:52:44.138039] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.121 [2024-07-25 11:52:44.138125] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f8d120 00:10:58.121 [2024-07-25 11:52:44.138135] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:58.121 [2024-07-25 11:52:44.138298] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df5960 00:10:58.121 [2024-07-25 11:52:44.138410] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f8d120 00:10:58.121 [2024-07-25 11:52:44.138420] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f8d120 00:10:58.121 [2024-07-25 11:52:44.138507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.121 pt2 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.121 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.380 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.380 "name": "raid_bdev1", 00:10:58.380 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:58.380 "strip_size_kb": 64, 00:10:58.380 "state": "online", 00:10:58.380 "raid_level": "raid0", 00:10:58.380 "superblock": true, 00:10:58.380 "num_base_bdevs": 2, 00:10:58.380 "num_base_bdevs_discovered": 2, 00:10:58.380 "num_base_bdevs_operational": 2, 00:10:58.380 "base_bdevs_list": [ 00:10:58.380 { 00:10:58.380 "name": "pt1", 00:10:58.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.380 "is_configured": true, 00:10:58.380 "data_offset": 2048, 00:10:58.380 "data_size": 63488 00:10:58.380 }, 00:10:58.380 { 00:10:58.380 "name": "pt2", 00:10:58.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.380 "is_configured": true, 00:10:58.380 "data_offset": 2048, 00:10:58.380 "data_size": 63488 00:10:58.380 } 00:10:58.380 ] 00:10:58.380 }' 00:10:58.380 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.380 11:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:58.948 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:59.207 [2024-07-25 11:52:45.116380] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.207 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:59.207 "name": "raid_bdev1", 00:10:59.207 "aliases": [ 00:10:59.207 "68fdbe6b-0f64-4935-a11c-88576d17b9db" 00:10:59.207 ], 00:10:59.207 "product_name": "Raid Volume", 00:10:59.207 "block_size": 512, 00:10:59.207 "num_blocks": 126976, 00:10:59.207 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:59.207 "assigned_rate_limits": { 00:10:59.207 "rw_ios_per_sec": 0, 00:10:59.207 "rw_mbytes_per_sec": 0, 00:10:59.207 "r_mbytes_per_sec": 0, 00:10:59.207 "w_mbytes_per_sec": 0 00:10:59.207 }, 00:10:59.207 "claimed": false, 00:10:59.207 "zoned": false, 00:10:59.207 "supported_io_types": { 00:10:59.207 "read": true, 00:10:59.207 "write": true, 00:10:59.207 "unmap": true, 00:10:59.207 "flush": true, 00:10:59.207 "reset": true, 00:10:59.207 "nvme_admin": false, 00:10:59.207 "nvme_io": false, 00:10:59.207 "nvme_io_md": false, 00:10:59.207 "write_zeroes": true, 00:10:59.207 "zcopy": false, 00:10:59.207 "get_zone_info": false, 00:10:59.207 "zone_management": false, 00:10:59.207 "zone_append": false, 00:10:59.207 "compare": false, 00:10:59.208 "compare_and_write": false, 00:10:59.208 "abort": false, 00:10:59.208 "seek_hole": false, 00:10:59.208 "seek_data": false, 00:10:59.208 "copy": false, 00:10:59.208 "nvme_iov_md": false 00:10:59.208 }, 00:10:59.208 "memory_domains": [ 00:10:59.208 { 00:10:59.208 "dma_device_id": "system", 00:10:59.208 "dma_device_type": 1 00:10:59.208 }, 00:10:59.208 { 00:10:59.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.208 "dma_device_type": 2 00:10:59.208 }, 00:10:59.208 { 00:10:59.208 "dma_device_id": "system", 00:10:59.208 "dma_device_type": 1 00:10:59.208 }, 00:10:59.208 { 00:10:59.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.208 "dma_device_type": 2 00:10:59.208 } 00:10:59.208 ], 00:10:59.208 "driver_specific": { 00:10:59.208 "raid": { 00:10:59.208 "uuid": "68fdbe6b-0f64-4935-a11c-88576d17b9db", 00:10:59.208 "strip_size_kb": 64, 00:10:59.208 "state": "online", 00:10:59.208 "raid_level": "raid0", 00:10:59.208 "superblock": true, 00:10:59.208 "num_base_bdevs": 2, 00:10:59.208 "num_base_bdevs_discovered": 2, 00:10:59.208 "num_base_bdevs_operational": 2, 00:10:59.208 "base_bdevs_list": [ 00:10:59.208 { 00:10:59.208 "name": "pt1", 00:10:59.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.208 "is_configured": true, 00:10:59.208 "data_offset": 2048, 00:10:59.208 "data_size": 63488 00:10:59.208 }, 00:10:59.208 { 00:10:59.208 "name": "pt2", 00:10:59.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.208 "is_configured": true, 00:10:59.208 "data_offset": 2048, 00:10:59.208 "data_size": 63488 00:10:59.208 } 00:10:59.208 ] 00:10:59.208 } 00:10:59.208 } 00:10:59.208 }' 00:10:59.208 
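Between the two property dumps the trace exercises superblock persistence: raid_bdev1 and both passthru bdevs are deleted, a direct bdev_raid_create on malloc1/malloc2 is expected to fail with JSON-RPC error -17 ("File exists") because the malloc bdevs still carry the raid superblock written through pt1/pt2, and re-creating the passthru bdevs lets the examine path re-assemble raid_bdev1 (state "configuring" after pt1, "online" again after pt2). A rough sketch of the same sequence; the harness wraps the failing call in its NOT helper, replaced here with a plain if:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
  # expected to fail: malloc1/malloc2 still carry the raid superblock written through pt1/pt2
  if rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo 'bdev_raid_create unexpectedly succeeded' >&2
  fi
  # re-creating the passthru bdevs triggers re-assembly from the on-disk superblock
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
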
11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.208 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:59.208 pt2' 00:10:59.208 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:59.208 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:59.208 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:59.467 "name": "pt1", 00:10:59.467 "aliases": [ 00:10:59.467 "00000000-0000-0000-0000-000000000001" 00:10:59.467 ], 00:10:59.467 "product_name": "passthru", 00:10:59.467 "block_size": 512, 00:10:59.467 "num_blocks": 65536, 00:10:59.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.467 "assigned_rate_limits": { 00:10:59.467 "rw_ios_per_sec": 0, 00:10:59.467 "rw_mbytes_per_sec": 0, 00:10:59.467 "r_mbytes_per_sec": 0, 00:10:59.467 "w_mbytes_per_sec": 0 00:10:59.467 }, 00:10:59.467 "claimed": true, 00:10:59.467 "claim_type": "exclusive_write", 00:10:59.467 "zoned": false, 00:10:59.467 "supported_io_types": { 00:10:59.467 "read": true, 00:10:59.467 "write": true, 00:10:59.467 "unmap": true, 00:10:59.467 "flush": true, 00:10:59.467 "reset": true, 00:10:59.467 "nvme_admin": false, 00:10:59.467 "nvme_io": false, 00:10:59.467 "nvme_io_md": false, 00:10:59.467 "write_zeroes": true, 00:10:59.467 "zcopy": true, 00:10:59.467 "get_zone_info": false, 00:10:59.467 "zone_management": false, 00:10:59.467 "zone_append": false, 00:10:59.467 "compare": false, 00:10:59.467 "compare_and_write": false, 00:10:59.467 "abort": true, 00:10:59.467 "seek_hole": false, 00:10:59.467 "seek_data": false, 00:10:59.467 "copy": true, 00:10:59.467 "nvme_iov_md": false 00:10:59.467 }, 00:10:59.467 "memory_domains": [ 00:10:59.467 { 00:10:59.467 "dma_device_id": "system", 00:10:59.467 "dma_device_type": 1 00:10:59.467 }, 00:10:59.467 { 00:10:59.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.467 "dma_device_type": 2 00:10:59.467 } 00:10:59.467 ], 00:10:59.467 "driver_specific": { 00:10:59.467 "passthru": { 00:10:59.467 "name": "pt1", 00:10:59.467 "base_bdev_name": "malloc1" 00:10:59.467 } 00:10:59.467 } 00:10:59.467 }' 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:59.467 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:59.726 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:59.985 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:59.985 "name": "pt2", 00:10:59.985 "aliases": [ 00:10:59.985 "00000000-0000-0000-0000-000000000002" 00:10:59.985 ], 00:10:59.985 "product_name": "passthru", 00:10:59.985 "block_size": 512, 00:10:59.985 "num_blocks": 65536, 00:10:59.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.985 "assigned_rate_limits": { 00:10:59.985 "rw_ios_per_sec": 0, 00:10:59.985 "rw_mbytes_per_sec": 0, 00:10:59.985 "r_mbytes_per_sec": 0, 00:10:59.985 "w_mbytes_per_sec": 0 00:10:59.985 }, 00:10:59.985 "claimed": true, 00:10:59.985 "claim_type": "exclusive_write", 00:10:59.985 "zoned": false, 00:10:59.985 "supported_io_types": { 00:10:59.985 "read": true, 00:10:59.985 "write": true, 00:10:59.985 "unmap": true, 00:10:59.985 "flush": true, 00:10:59.985 "reset": true, 00:10:59.985 "nvme_admin": false, 00:10:59.985 "nvme_io": false, 00:10:59.985 "nvme_io_md": false, 00:10:59.985 "write_zeroes": true, 00:10:59.985 "zcopy": true, 00:10:59.985 "get_zone_info": false, 00:10:59.985 "zone_management": false, 00:10:59.985 "zone_append": false, 00:10:59.985 "compare": false, 00:10:59.985 "compare_and_write": false, 00:10:59.985 "abort": true, 00:10:59.985 "seek_hole": false, 00:10:59.985 "seek_data": false, 00:10:59.985 "copy": true, 00:10:59.985 "nvme_iov_md": false 00:10:59.985 }, 00:10:59.985 "memory_domains": [ 00:10:59.985 { 00:10:59.985 "dma_device_id": "system", 00:10:59.985 "dma_device_type": 1 00:10:59.985 }, 00:10:59.985 { 00:10:59.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.985 "dma_device_type": 2 00:10:59.985 } 00:10:59.985 ], 00:10:59.985 "driver_specific": { 00:10:59.985 "passthru": { 00:10:59.985 "name": "pt2", 00:10:59.985 "base_bdev_name": "malloc2" 00:10:59.985 } 00:10:59.985 } 00:10:59.985 }' 00:10:59.985 11:52:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.985 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.985 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:59.985 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:00.244 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:00.503 [2024-07-25 11:52:46.528087] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 68fdbe6b-0f64-4935-a11c-88576d17b9db '!=' 68fdbe6b-0f64-4935-a11c-88576d17b9db ']' 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4095850 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4095850 ']' 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4095850 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4095850 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4095850' 00:11:00.503 killing process with pid 4095850 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4095850 00:11:00.503 [2024-07-25 11:52:46.610983] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.503 [2024-07-25 11:52:46.611032] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.503 [2024-07-25 11:52:46.611070] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.503 [2024-07-25 11:52:46.611086] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f8d120 name raid_bdev1, state offline 00:11:00.503 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4095850 00:11:00.762 [2024-07-25 11:52:46.626886] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.762 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:00.762 00:11:00.762 real 0m10.042s 00:11:00.762 user 0m17.895s 00:11:00.762 sys 0m1.897s 00:11:00.762 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.762 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.762 ************************************ 00:11:00.762 END TEST raid_superblock_test 00:11:00.762 ************************************ 00:11:00.762 11:52:46 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:00.762 11:52:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:00.762 11:52:46 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.762 11:52:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.083 ************************************ 00:11:01.083 START TEST raid_read_error_test 00:11:01.083 ************************************ 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.89zJEvsWe2 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4097908 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4097908 /var/tmp/spdk-raid.sock 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4097908 ']' 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:01.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.083 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.083 [2024-07-25 11:52:46.951507] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:01.083 [2024-07-25 11:52:46.951563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097908 ] 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.1 cannot be used 
00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.083 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:01.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.084 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:01.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.084 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:01.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.084 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:01.084 [2024-07-25 11:52:47.083838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.084 [2024-07-25 11:52:47.171550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.342 [2024-07-25 11:52:47.232847] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.342 [2024-07-25 11:52:47.232900] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.908 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.908 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:01.908 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:01.908 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:02.166 BaseBdev1_malloc 00:11:02.166 11:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:02.166 true 00:11:02.425 11:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:02.425 [2024-07-25 11:52:48.489771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev1_malloc 00:11:02.425 [2024-07-25 11:52:48.489811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.425 [2024-07-25 11:52:48.489829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca5190 00:11:02.425 [2024-07-25 11:52:48.489840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.425 [2024-07-25 11:52:48.491395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.425 [2024-07-25 11:52:48.491423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:02.425 BaseBdev1 00:11:02.425 11:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:02.425 11:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.991 BaseBdev2_malloc 00:11:02.991 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:03.249 true 00:11:03.249 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.508 [2024-07-25 11:52:49.440488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.508 [2024-07-25 11:52:49.440527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.508 [2024-07-25 11:52:49.440545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca9e20 00:11:03.508 [2024-07-25 11:52:49.440556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.508 [2024-07-25 11:52:49.441949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.508 [2024-07-25 11:52:49.441975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.508 BaseBdev2 00:11:03.508 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:04.076 [2024-07-25 11:52:49.933795] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.076 [2024-07-25 11:52:49.934975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.076 [2024-07-25 11:52:49.935162] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1caba50 00:11:04.076 [2024-07-25 11:52:49.935175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:04.076 [2024-07-25 11:52:49.935351] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b00070 00:11:04.076 [2024-07-25 11:52:49.935489] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1caba50 00:11:04.076 [2024-07-25 11:52:49.935499] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1caba50 00:11:04.076 [2024-07-25 11:52:49.935595] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.076 11:52:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.076 11:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.076 "name": "raid_bdev1", 00:11:04.076 "uuid": "3abcee64-2c4b-4b8a-81fc-00de3933e633", 00:11:04.076 "strip_size_kb": 64, 00:11:04.076 "state": "online", 00:11:04.076 "raid_level": "raid0", 00:11:04.076 "superblock": true, 00:11:04.076 "num_base_bdevs": 2, 00:11:04.076 "num_base_bdevs_discovered": 2, 00:11:04.076 "num_base_bdevs_operational": 2, 00:11:04.076 "base_bdevs_list": [ 00:11:04.076 { 00:11:04.076 "name": "BaseBdev1", 00:11:04.076 "uuid": "ab076199-eb81-5f6e-847d-f0c9c7856084", 00:11:04.076 "is_configured": true, 00:11:04.076 "data_offset": 2048, 00:11:04.076 "data_size": 63488 00:11:04.076 }, 00:11:04.076 { 00:11:04.076 "name": "BaseBdev2", 00:11:04.076 "uuid": "951b36e4-d917-502f-8554-e291308c2d4d", 00:11:04.076 "is_configured": true, 00:11:04.076 "data_offset": 2048, 00:11:04.076 "data_size": 63488 00:11:04.076 } 00:11:04.076 ] 00:11:04.076 }' 00:11:04.076 11:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.076 11:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.642 11:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:04.642 11:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:04.902 [2024-07-25 11:52:50.812352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ca6a80 00:11:05.839 11:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:06.409 11:52:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.409 "name": "raid_bdev1", 00:11:06.409 "uuid": "3abcee64-2c4b-4b8a-81fc-00de3933e633", 00:11:06.409 "strip_size_kb": 64, 00:11:06.409 "state": "online", 00:11:06.409 "raid_level": "raid0", 00:11:06.409 "superblock": true, 00:11:06.409 "num_base_bdevs": 2, 00:11:06.409 "num_base_bdevs_discovered": 2, 00:11:06.409 "num_base_bdevs_operational": 2, 00:11:06.409 "base_bdevs_list": [ 00:11:06.409 { 00:11:06.409 "name": "BaseBdev1", 00:11:06.409 "uuid": "ab076199-eb81-5f6e-847d-f0c9c7856084", 00:11:06.409 "is_configured": true, 00:11:06.409 "data_offset": 2048, 00:11:06.409 "data_size": 63488 00:11:06.409 }, 00:11:06.409 { 00:11:06.409 "name": "BaseBdev2", 00:11:06.409 "uuid": "951b36e4-d917-502f-8554-e291308c2d4d", 00:11:06.409 "is_configured": true, 00:11:06.409 "data_offset": 2048, 00:11:06.409 "data_size": 63488 00:11:06.409 } 00:11:06.409 ] 00:11:06.409 }' 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.409 11:52:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.977 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:07.237 [2024-07-25 11:52:53.192728] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.237 [2024-07-25 11:52:53.192763] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.237 [2024-07-25 11:52:53.195664] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.237 [2024-07-25 11:52:53.195694] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.237 [2024-07-25 11:52:53.195720] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.237 [2024-07-25 11:52:53.195730] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1caba50 name raid_bdev1, state 
offline 00:11:07.237 0 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4097908 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4097908 ']' 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4097908 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4097908 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4097908' 00:11:07.237 killing process with pid 4097908 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4097908 00:11:07.237 [2024-07-25 11:52:53.266167] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.237 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4097908 00:11:07.237 [2024-07-25 11:52:53.275520] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.89zJEvsWe2 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:11:07.497 00:11:07.497 real 0m6.587s 00:11:07.497 user 0m10.429s 00:11:07.497 sys 0m1.143s 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.497 11:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.497 ************************************ 00:11:07.497 END TEST raid_read_error_test 00:11:07.497 ************************************ 00:11:07.497 11:52:53 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:11:07.497 11:52:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:07.497 11:52:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.497 11:52:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.497 ************************************ 00:11:07.497 START TEST raid_write_error_test 00:11:07.497 ************************************ 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:11:07.497 11:52:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.F3cHj7wsqw 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4099077 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4099077 /var/tmp/spdk-raid.sock 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4099077 ']' 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:07.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
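raid_write_error_test, like the read variant earlier in this log, hosts its bdevs inside a bdevperf application that is started on a private RPC socket and left idle until perform_tests is sent to it later. A minimal sketch of that startup, using only arguments and helpers visible in this run (SPDK stands in for the /var/jenkins/workspace/crypto-phy-autotest/spdk prefix; the output redirect is an assumption added for illustration, since xtrace does not show redirections):

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  bdevperf_log=/raidtest/tmp.F3cHj7wsqw     # from 'mktemp -p /raidtest' above
  # -z keeps bdevperf idle until an explicit perform_tests RPC arrives later in the log
  $SPDK/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
  raid_pid=$!                               # 4099077 in this run
  # waitforlisten (common/autotest_common.sh) polls until the UNIX socket accepts RPCs
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock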
00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.497 11:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.756 [2024-07-25 11:52:53.642930] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:07.756 [2024-07-25 11:52:53.642992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099077 ] 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.2 cannot be used 
00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:07.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.756 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:07.756 [2024-07-25 11:52:53.774163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.756 [2024-07-25 11:52:53.861095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.015 [2024-07-25 11:52:53.914521] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.015 [2024-07-25 11:52:53.914547] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.584 11:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.584 11:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:08.584 11:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:08.584 11:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.154 BaseBdev1_malloc 00:11:09.154 11:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:09.154 true 00:11:09.413 11:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.672 [2024-07-25 11:52:55.760066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.672 [2024-07-25 11:52:55.760106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:11:09.672 [2024-07-25 11:52:55.760124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168a190 00:11:09.672 [2024-07-25 11:52:55.760136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.672 [2024-07-25 11:52:55.761790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.672 [2024-07-25 11:52:55.761818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.672 BaseBdev1 00:11:09.931 11:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:09.931 11:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.189 BaseBdev2_malloc 00:11:10.189 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:10.447 true 00:11:10.447 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.706 [2024-07-25 11:52:56.682797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.706 [2024-07-25 11:52:56.682835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.706 [2024-07-25 11:52:56.682852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168ee20 00:11:10.706 [2024-07-25 11:52:56.682869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.706 [2024-07-25 11:52:56.684151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.706 [2024-07-25 11:52:56.684177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.706 BaseBdev2 00:11:10.706 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:10.966 [2024-07-25 11:52:56.927469] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.966 [2024-07-25 11:52:56.928641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.966 [2024-07-25 11:52:56.928815] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1690a50 00:11:10.966 [2024-07-25 11:52:56.928827] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:10.966 [2024-07-25 11:52:56.928997] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14e5070 00:11:10.966 [2024-07-25 11:52:56.929132] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1690a50 00:11:10.966 [2024-07-25 11:52:56.929150] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1690a50 00:11:10.966 [2024-07-25 11:52:56.929243] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.966 11:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.225 11:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.225 "name": "raid_bdev1", 00:11:11.225 "uuid": "07602ea9-775d-4e80-82f2-1171594695ba", 00:11:11.225 "strip_size_kb": 64, 00:11:11.225 "state": "online", 00:11:11.225 "raid_level": "raid0", 00:11:11.225 "superblock": true, 00:11:11.225 "num_base_bdevs": 2, 00:11:11.225 "num_base_bdevs_discovered": 2, 00:11:11.225 "num_base_bdevs_operational": 2, 00:11:11.225 "base_bdevs_list": [ 00:11:11.225 { 00:11:11.225 "name": "BaseBdev1", 00:11:11.225 "uuid": "fbf627e4-6f85-54f3-bb1d-0527225e4b07", 00:11:11.225 "is_configured": true, 00:11:11.225 "data_offset": 2048, 00:11:11.225 "data_size": 63488 00:11:11.225 }, 00:11:11.225 { 00:11:11.225 "name": "BaseBdev2", 00:11:11.225 "uuid": "ff35b08e-c177-5639-9a9b-4f76b249b869", 00:11:11.225 "is_configured": true, 00:11:11.225 "data_offset": 2048, 00:11:11.225 "data_size": 63488 00:11:11.225 } 00:11:11.225 ] 00:11:11.225 }' 00:11:11.225 11:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.225 11:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.794 11:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:11.794 11:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:11.794 [2024-07-25 11:52:57.838096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x168ba80 00:11:12.732 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
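The interleaved xtrace above is easier to follow as the RPC sequence the write test drives. The sketch below is reconstructed only from commands that appear in this log; RPC is an illustrative shorthand for rpc.py pointed at the raid socket (SPDK as in the previous sketch), and the backgrounding of perform_tests is inferred from the test continuing while the workload runs:

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Each base bdev is a malloc bdev wrapped in an error bdev (EE_*) and a passthru bdev:
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $RPC bdev_error_create BaseBdev1_malloc                   # exposes EE_BaseBdev1_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $RPC bdev_error_create BaseBdev2_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
  # Assemble raid0 with a 64k strip (-z 64) and a superblock (-s), as verified above:
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # Kick off the bdevperf workload, then inject write failures into the first base bdev:
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  sleep 1
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure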
00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.991 11:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.250 11:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.250 "name": "raid_bdev1", 00:11:13.250 "uuid": "07602ea9-775d-4e80-82f2-1171594695ba", 00:11:13.250 "strip_size_kb": 64, 00:11:13.250 "state": "online", 00:11:13.250 "raid_level": "raid0", 00:11:13.250 "superblock": true, 00:11:13.250 "num_base_bdevs": 2, 00:11:13.250 "num_base_bdevs_discovered": 2, 00:11:13.250 "num_base_bdevs_operational": 2, 00:11:13.250 "base_bdevs_list": [ 00:11:13.250 { 00:11:13.250 "name": "BaseBdev1", 00:11:13.250 "uuid": "fbf627e4-6f85-54f3-bb1d-0527225e4b07", 00:11:13.250 "is_configured": true, 00:11:13.250 "data_offset": 2048, 00:11:13.250 "data_size": 63488 00:11:13.250 }, 00:11:13.250 { 00:11:13.250 "name": "BaseBdev2", 00:11:13.250 "uuid": "ff35b08e-c177-5639-9a9b-4f76b249b869", 00:11:13.250 "is_configured": true, 00:11:13.250 "data_offset": 2048, 00:11:13.250 "data_size": 63488 00:11:13.250 } 00:11:13.250 ] 00:11:13.250 }' 00:11:13.250 11:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.250 11:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.817 11:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:14.077 [2024-07-25 11:52:59.988220] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.077 [2024-07-25 11:52:59.988257] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.077 [2024-07-25 11:52:59.991194] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.077 [2024-07-25 11:52:59.991224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.077 [2024-07-25 11:52:59.991250] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.077 [2024-07-25 11:52:59.991260] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1690a50 name raid_bdev1, state offline 00:11:14.077 0 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 
-- # killprocess 4099077 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4099077 ']' 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4099077 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4099077 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4099077' 00:11:14.077 killing process with pid 4099077 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4099077 00:11:14.077 [2024-07-25 11:53:00.067414] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.077 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4099077 00:11:14.077 [2024-07-25 11:53:00.077402] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.F3cHj7wsqw 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:11:14.337 00:11:14.337 real 0m6.712s 00:11:14.337 user 0m10.705s 00:11:14.337 sys 0m1.110s 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.337 11:53:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.337 ************************************ 00:11:14.337 END TEST raid_write_error_test 00:11:14.337 ************************************ 00:11:14.337 11:53:00 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:14.337 11:53:00 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:14.337 11:53:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:14.337 11:53:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.337 11:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.337 ************************************ 00:11:14.337 START TEST raid_state_function_test 00:11:14.337 ************************************ 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 
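The pass/fail decision for both error tests above is read back from the bdevperf log file rather than over RPC. A condensed sketch of that check, using the pipeline and values shown for the write test just above (the /raidtest/tmp.F3cHj7wsqw path and the 0.47 value are specific to this run):

  # The script takes column 6 of bdevperf's raid_bdev1 summary line as fail_per_s:
  fail_per_s=$(grep -v Job /raidtest/tmp.F3cHj7wsqw | grep raid_bdev1 | awk '{print $6}')
  # raid0 has no redundancy (has_redundancy returns 1), so the injected errors are
  # expected to surface as a non-zero failure rate:
  [[ $fail_per_s != \0\.\0\0 ]]             # 0.47 != 0.00 here, so the test passes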
00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4100240 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4100240' 00:11:14.337 Process raid pid: 4100240 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4100240 /var/tmp/spdk-raid.sock 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4100240 ']' 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:14.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.337 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.337 [2024-07-25 11:53:00.432761] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:14.337 [2024-07-25 11:53:00.432819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:14.659 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:14.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:14.659 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:14.659 [2024-07-25 11:53:00.563736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.659 [2024-07-25 11:53:00.647770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.659 [2024-07-25 11:53:00.706167] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.659 [2024-07-25 11:53:00.706190] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.228 11:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.228 11:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:15.228 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:15.487 [2024-07-25 11:53:01.531702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.487 [2024-07-25 11:53:01.531739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.487 [2024-07-25 11:53:01.531749] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.487 [2024-07-25 11:53:01.531760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.487 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.746 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.746 "name": "Existed_Raid", 00:11:15.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.746 "strip_size_kb": 64, 00:11:15.746 "state": "configuring", 00:11:15.746 "raid_level": "concat", 00:11:15.746 "superblock": false, 00:11:15.746 "num_base_bdevs": 2, 00:11:15.746 "num_base_bdevs_discovered": 0, 00:11:15.746 "num_base_bdevs_operational": 2, 00:11:15.746 "base_bdevs_list": [ 00:11:15.746 { 00:11:15.746 "name": "BaseBdev1", 00:11:15.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.746 "is_configured": false, 00:11:15.746 "data_offset": 0, 00:11:15.746 "data_size": 0 00:11:15.746 }, 00:11:15.746 { 00:11:15.746 "name": "BaseBdev2", 00:11:15.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.746 "is_configured": false, 00:11:15.746 "data_offset": 0, 00:11:15.746 "data_size": 0 00:11:15.746 } 00:11:15.746 ] 00:11:15.746 }' 00:11:15.746 11:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.746 11:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 11:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:16.575 [2024-07-25 11:53:02.558433] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.575 [2024-07-25 11:53:02.558465] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2085f20 name Existed_Raid, state configuring 00:11:16.575 11:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:16.835 [2024-07-25 11:53:02.783026] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.835 [2024-07-25 11:53:02.783054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.835 [2024-07-25 11:53:02.783063] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.835 [2024-07-25 11:53:02.783074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.835 11:53:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.094 [2024-07-25 11:53:03.017170] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.094 BaseBdev1 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.094 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.354 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.354 [ 00:11:17.354 { 00:11:17.354 "name": "BaseBdev1", 00:11:17.354 "aliases": [ 00:11:17.354 "971f0097-412a-411d-bb82-a18758f7309a" 00:11:17.354 ], 00:11:17.354 "product_name": "Malloc disk", 00:11:17.354 "block_size": 512, 00:11:17.354 "num_blocks": 65536, 00:11:17.354 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:17.355 "assigned_rate_limits": { 00:11:17.355 "rw_ios_per_sec": 0, 00:11:17.355 "rw_mbytes_per_sec": 0, 00:11:17.355 "r_mbytes_per_sec": 0, 00:11:17.355 "w_mbytes_per_sec": 0 00:11:17.355 }, 00:11:17.355 "claimed": true, 00:11:17.355 "claim_type": "exclusive_write", 00:11:17.355 "zoned": false, 00:11:17.355 "supported_io_types": { 00:11:17.355 "read": true, 00:11:17.355 "write": true, 00:11:17.355 "unmap": true, 00:11:17.355 "flush": true, 00:11:17.355 "reset": true, 00:11:17.355 "nvme_admin": false, 00:11:17.355 "nvme_io": false, 00:11:17.355 "nvme_io_md": false, 00:11:17.355 "write_zeroes": true, 00:11:17.355 "zcopy": true, 00:11:17.355 "get_zone_info": false, 00:11:17.355 "zone_management": false, 00:11:17.355 "zone_append": false, 00:11:17.355 "compare": false, 00:11:17.355 "compare_and_write": false, 00:11:17.355 "abort": true, 00:11:17.355 "seek_hole": false, 00:11:17.355 "seek_data": false, 00:11:17.355 "copy": true, 00:11:17.355 "nvme_iov_md": false 00:11:17.355 }, 00:11:17.355 "memory_domains": [ 00:11:17.355 { 00:11:17.355 "dma_device_id": "system", 00:11:17.355 "dma_device_type": 1 00:11:17.355 }, 00:11:17.355 { 00:11:17.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.355 "dma_device_type": 2 00:11:17.355 } 00:11:17.355 ], 00:11:17.355 "driver_specific": {} 00:11:17.355 } 00:11:17.355 ] 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.614 "name": "Existed_Raid", 00:11:17.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.614 "strip_size_kb": 64, 00:11:17.614 "state": "configuring", 00:11:17.614 "raid_level": "concat", 00:11:17.614 "superblock": false, 00:11:17.614 "num_base_bdevs": 2, 00:11:17.614 "num_base_bdevs_discovered": 1, 00:11:17.614 "num_base_bdevs_operational": 2, 00:11:17.614 "base_bdevs_list": [ 00:11:17.614 { 00:11:17.614 "name": "BaseBdev1", 00:11:17.614 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:17.614 "is_configured": true, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 65536 00:11:17.614 }, 00:11:17.614 { 00:11:17.614 "name": "BaseBdev2", 00:11:17.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.614 "is_configured": false, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 0 00:11:17.614 } 00:11:17.614 ] 00:11:17.614 }' 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.614 11:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.183 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:18.442 [2024-07-25 11:53:04.485025] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.442 [2024-07-25 11:53:04.485060] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2085810 name Existed_Raid, state configuring 00:11:18.442 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:18.702 [2024-07-25 11:53:04.713658] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.702 [2024-07-25 11:53:04.715059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.702 [2024-07-25 11:53:04.715093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:18.702 11:53:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.702 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.961 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.961 "name": "Existed_Raid", 00:11:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.961 "strip_size_kb": 64, 00:11:18.961 "state": "configuring", 00:11:18.961 "raid_level": "concat", 00:11:18.961 "superblock": false, 00:11:18.961 "num_base_bdevs": 2, 00:11:18.961 "num_base_bdevs_discovered": 1, 00:11:18.961 "num_base_bdevs_operational": 2, 00:11:18.962 "base_bdevs_list": [ 00:11:18.962 { 00:11:18.962 "name": "BaseBdev1", 00:11:18.962 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:18.962 "is_configured": true, 00:11:18.962 "data_offset": 0, 00:11:18.962 "data_size": 65536 00:11:18.962 }, 00:11:18.962 { 00:11:18.962 "name": "BaseBdev2", 00:11:18.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.962 "is_configured": false, 00:11:18.962 "data_offset": 0, 00:11:18.962 "data_size": 0 00:11:18.962 } 00:11:18.962 ] 00:11:18.962 }' 00:11:18.962 11:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.962 11:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.530 11:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.790 [2024-07-25 11:53:05.719449] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.790 [2024-07-25 11:53:05.719482] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2086600 00:11:19.790 [2024-07-25 11:53:05.719489] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:19.790 [2024-07-25 11:53:05.719670] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x207f0e0 00:11:19.790 [2024-07-25 11:53:05.719780] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2086600 00:11:19.790 [2024-07-25 11:53:05.719789] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2086600 00:11:19.790 [2024-07-25 11:53:05.719942] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.790 BaseBdev2 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:19.790 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:20.049 11:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.309 [ 00:11:20.309 { 00:11:20.309 "name": "BaseBdev2", 00:11:20.309 "aliases": [ 00:11:20.309 "75a9f05e-58ed-41c9-a902-c045f6f7d6fe" 00:11:20.309 ], 00:11:20.309 "product_name": "Malloc disk", 00:11:20.309 "block_size": 512, 00:11:20.309 "num_blocks": 65536, 00:11:20.309 "uuid": "75a9f05e-58ed-41c9-a902-c045f6f7d6fe", 00:11:20.309 "assigned_rate_limits": { 00:11:20.309 "rw_ios_per_sec": 0, 00:11:20.309 "rw_mbytes_per_sec": 0, 00:11:20.309 "r_mbytes_per_sec": 0, 00:11:20.309 "w_mbytes_per_sec": 0 00:11:20.309 }, 00:11:20.309 "claimed": true, 00:11:20.309 "claim_type": "exclusive_write", 00:11:20.309 "zoned": false, 00:11:20.309 "supported_io_types": { 00:11:20.309 "read": true, 00:11:20.309 "write": true, 00:11:20.309 "unmap": true, 00:11:20.309 "flush": true, 00:11:20.309 "reset": true, 00:11:20.309 "nvme_admin": false, 00:11:20.309 "nvme_io": false, 00:11:20.309 "nvme_io_md": false, 00:11:20.309 "write_zeroes": true, 00:11:20.309 "zcopy": true, 00:11:20.309 "get_zone_info": false, 00:11:20.309 "zone_management": false, 00:11:20.309 "zone_append": false, 00:11:20.309 "compare": false, 00:11:20.309 "compare_and_write": false, 00:11:20.309 "abort": true, 00:11:20.309 "seek_hole": false, 00:11:20.309 "seek_data": false, 00:11:20.309 "copy": true, 00:11:20.309 "nvme_iov_md": false 00:11:20.309 }, 00:11:20.309 "memory_domains": [ 00:11:20.309 { 00:11:20.309 "dma_device_id": "system", 00:11:20.309 "dma_device_type": 1 00:11:20.309 }, 00:11:20.309 { 00:11:20.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.309 "dma_device_type": 2 00:11:20.309 } 00:11:20.309 ], 00:11:20.309 "driver_specific": {} 00:11:20.309 } 00:11:20.309 ] 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state 
Existed_Raid online concat 64 2 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.309 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.309 "name": "Existed_Raid", 00:11:20.309 "uuid": "6cd3b9b5-9720-4d66-afd8-d754e5ad9487", 00:11:20.309 "strip_size_kb": 64, 00:11:20.309 "state": "online", 00:11:20.309 "raid_level": "concat", 00:11:20.309 "superblock": false, 00:11:20.309 "num_base_bdevs": 2, 00:11:20.309 "num_base_bdevs_discovered": 2, 00:11:20.309 "num_base_bdevs_operational": 2, 00:11:20.309 "base_bdevs_list": [ 00:11:20.309 { 00:11:20.309 "name": "BaseBdev1", 00:11:20.310 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:20.310 "is_configured": true, 00:11:20.310 "data_offset": 0, 00:11:20.310 "data_size": 65536 00:11:20.310 }, 00:11:20.310 { 00:11:20.310 "name": "BaseBdev2", 00:11:20.310 "uuid": "75a9f05e-58ed-41c9-a902-c045f6f7d6fe", 00:11:20.310 "is_configured": true, 00:11:20.310 "data_offset": 0, 00:11:20.310 "data_size": 65536 00:11:20.310 } 00:11:20.310 ] 00:11:20.310 }' 00:11:20.310 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.310 11:53:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:20.879 11:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:21.138 
[2024-07-25 11:53:07.167507] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.138 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:21.138 "name": "Existed_Raid", 00:11:21.138 "aliases": [ 00:11:21.139 "6cd3b9b5-9720-4d66-afd8-d754e5ad9487" 00:11:21.139 ], 00:11:21.139 "product_name": "Raid Volume", 00:11:21.139 "block_size": 512, 00:11:21.139 "num_blocks": 131072, 00:11:21.139 "uuid": "6cd3b9b5-9720-4d66-afd8-d754e5ad9487", 00:11:21.139 "assigned_rate_limits": { 00:11:21.139 "rw_ios_per_sec": 0, 00:11:21.139 "rw_mbytes_per_sec": 0, 00:11:21.139 "r_mbytes_per_sec": 0, 00:11:21.139 "w_mbytes_per_sec": 0 00:11:21.139 }, 00:11:21.139 "claimed": false, 00:11:21.139 "zoned": false, 00:11:21.139 "supported_io_types": { 00:11:21.139 "read": true, 00:11:21.139 "write": true, 00:11:21.139 "unmap": true, 00:11:21.139 "flush": true, 00:11:21.139 "reset": true, 00:11:21.139 "nvme_admin": false, 00:11:21.139 "nvme_io": false, 00:11:21.139 "nvme_io_md": false, 00:11:21.139 "write_zeroes": true, 00:11:21.139 "zcopy": false, 00:11:21.139 "get_zone_info": false, 00:11:21.139 "zone_management": false, 00:11:21.139 "zone_append": false, 00:11:21.139 "compare": false, 00:11:21.139 "compare_and_write": false, 00:11:21.139 "abort": false, 00:11:21.139 "seek_hole": false, 00:11:21.139 "seek_data": false, 00:11:21.139 "copy": false, 00:11:21.139 "nvme_iov_md": false 00:11:21.139 }, 00:11:21.139 "memory_domains": [ 00:11:21.139 { 00:11:21.139 "dma_device_id": "system", 00:11:21.139 "dma_device_type": 1 00:11:21.139 }, 00:11:21.139 { 00:11:21.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.139 "dma_device_type": 2 00:11:21.139 }, 00:11:21.139 { 00:11:21.139 "dma_device_id": "system", 00:11:21.139 "dma_device_type": 1 00:11:21.139 }, 00:11:21.139 { 00:11:21.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.139 "dma_device_type": 2 00:11:21.139 } 00:11:21.139 ], 00:11:21.139 "driver_specific": { 00:11:21.139 "raid": { 00:11:21.139 "uuid": "6cd3b9b5-9720-4d66-afd8-d754e5ad9487", 00:11:21.139 "strip_size_kb": 64, 00:11:21.139 "state": "online", 00:11:21.139 "raid_level": "concat", 00:11:21.139 "superblock": false, 00:11:21.139 "num_base_bdevs": 2, 00:11:21.139 "num_base_bdevs_discovered": 2, 00:11:21.139 "num_base_bdevs_operational": 2, 00:11:21.139 "base_bdevs_list": [ 00:11:21.139 { 00:11:21.139 "name": "BaseBdev1", 00:11:21.139 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:21.139 "is_configured": true, 00:11:21.139 "data_offset": 0, 00:11:21.139 "data_size": 65536 00:11:21.139 }, 00:11:21.139 { 00:11:21.139 "name": "BaseBdev2", 00:11:21.139 "uuid": "75a9f05e-58ed-41c9-a902-c045f6f7d6fe", 00:11:21.139 "is_configured": true, 00:11:21.139 "data_offset": 0, 00:11:21.139 "data_size": 65536 00:11:21.139 } 00:11:21.139 ] 00:11:21.139 } 00:11:21.139 } 00:11:21.139 }' 00:11:21.139 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.139 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:21.139 BaseBdev2' 00:11:21.139 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:21.139 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:21.139 11:53:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.398 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.398 "name": "BaseBdev1", 00:11:21.398 "aliases": [ 00:11:21.398 "971f0097-412a-411d-bb82-a18758f7309a" 00:11:21.398 ], 00:11:21.398 "product_name": "Malloc disk", 00:11:21.398 "block_size": 512, 00:11:21.398 "num_blocks": 65536, 00:11:21.398 "uuid": "971f0097-412a-411d-bb82-a18758f7309a", 00:11:21.398 "assigned_rate_limits": { 00:11:21.398 "rw_ios_per_sec": 0, 00:11:21.398 "rw_mbytes_per_sec": 0, 00:11:21.398 "r_mbytes_per_sec": 0, 00:11:21.398 "w_mbytes_per_sec": 0 00:11:21.398 }, 00:11:21.398 "claimed": true, 00:11:21.398 "claim_type": "exclusive_write", 00:11:21.398 "zoned": false, 00:11:21.398 "supported_io_types": { 00:11:21.398 "read": true, 00:11:21.398 "write": true, 00:11:21.398 "unmap": true, 00:11:21.398 "flush": true, 00:11:21.398 "reset": true, 00:11:21.398 "nvme_admin": false, 00:11:21.398 "nvme_io": false, 00:11:21.398 "nvme_io_md": false, 00:11:21.398 "write_zeroes": true, 00:11:21.398 "zcopy": true, 00:11:21.398 "get_zone_info": false, 00:11:21.398 "zone_management": false, 00:11:21.398 "zone_append": false, 00:11:21.398 "compare": false, 00:11:21.398 "compare_and_write": false, 00:11:21.398 "abort": true, 00:11:21.398 "seek_hole": false, 00:11:21.398 "seek_data": false, 00:11:21.398 "copy": true, 00:11:21.398 "nvme_iov_md": false 00:11:21.398 }, 00:11:21.398 "memory_domains": [ 00:11:21.398 { 00:11:21.398 "dma_device_id": "system", 00:11:21.398 "dma_device_type": 1 00:11:21.398 }, 00:11:21.399 { 00:11:21.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.399 "dma_device_type": 2 00:11:21.399 } 00:11:21.399 ], 00:11:21.399 "driver_specific": {} 00:11:21.399 }' 00:11:21.399 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.399 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:21.658 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.917 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:11:21.917 "name": "BaseBdev2", 00:11:21.917 "aliases": [ 00:11:21.917 "75a9f05e-58ed-41c9-a902-c045f6f7d6fe" 00:11:21.917 ], 00:11:21.917 "product_name": "Malloc disk", 00:11:21.917 "block_size": 512, 00:11:21.917 "num_blocks": 65536, 00:11:21.917 "uuid": "75a9f05e-58ed-41c9-a902-c045f6f7d6fe", 00:11:21.917 "assigned_rate_limits": { 00:11:21.917 "rw_ios_per_sec": 0, 00:11:21.917 "rw_mbytes_per_sec": 0, 00:11:21.917 "r_mbytes_per_sec": 0, 00:11:21.917 "w_mbytes_per_sec": 0 00:11:21.917 }, 00:11:21.917 "claimed": true, 00:11:21.917 "claim_type": "exclusive_write", 00:11:21.917 "zoned": false, 00:11:21.917 "supported_io_types": { 00:11:21.917 "read": true, 00:11:21.917 "write": true, 00:11:21.917 "unmap": true, 00:11:21.917 "flush": true, 00:11:21.917 "reset": true, 00:11:21.917 "nvme_admin": false, 00:11:21.917 "nvme_io": false, 00:11:21.917 "nvme_io_md": false, 00:11:21.917 "write_zeroes": true, 00:11:21.917 "zcopy": true, 00:11:21.917 "get_zone_info": false, 00:11:21.917 "zone_management": false, 00:11:21.917 "zone_append": false, 00:11:21.917 "compare": false, 00:11:21.917 "compare_and_write": false, 00:11:21.917 "abort": true, 00:11:21.917 "seek_hole": false, 00:11:21.917 "seek_data": false, 00:11:21.917 "copy": true, 00:11:21.917 "nvme_iov_md": false 00:11:21.917 }, 00:11:21.917 "memory_domains": [ 00:11:21.917 { 00:11:21.917 "dma_device_id": "system", 00:11:21.917 "dma_device_type": 1 00:11:21.917 }, 00:11:21.917 { 00:11:21.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.917 "dma_device_type": 2 00:11:21.917 } 00:11:21.917 ], 00:11:21.917 "driver_specific": {} 00:11:21.917 }' 00:11:21.917 11:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.917 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:22.177 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:22.437 [2024-07-25 11:53:08.518860] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.437 [2024-07-25 11:53:08.518886] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.437 [2024-07-25 11:53:08.518923] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:22.437 11:53:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.437 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.696 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.696 "name": "Existed_Raid", 00:11:22.696 "uuid": "6cd3b9b5-9720-4d66-afd8-d754e5ad9487", 00:11:22.696 "strip_size_kb": 64, 00:11:22.696 "state": "offline", 00:11:22.696 "raid_level": "concat", 00:11:22.696 "superblock": false, 00:11:22.697 "num_base_bdevs": 2, 00:11:22.697 "num_base_bdevs_discovered": 1, 00:11:22.697 "num_base_bdevs_operational": 1, 00:11:22.697 "base_bdevs_list": [ 00:11:22.697 { 00:11:22.697 "name": null, 00:11:22.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.697 "is_configured": false, 00:11:22.697 "data_offset": 0, 00:11:22.697 "data_size": 65536 00:11:22.697 }, 00:11:22.697 { 00:11:22.697 "name": "BaseBdev2", 00:11:22.697 "uuid": "75a9f05e-58ed-41c9-a902-c045f6f7d6fe", 00:11:22.697 "is_configured": true, 00:11:22.697 "data_offset": 0, 00:11:22.697 "data_size": 65536 00:11:22.697 } 00:11:22.697 ] 00:11:22.697 }' 00:11:22.697 11:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.697 11:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.266 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:23.266 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:23.266 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.266 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
jq -r '.[0]["name"]' 00:11:23.525 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:23.525 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.525 11:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:24.094 [2024-07-25 11:53:10.031966] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.094 [2024-07-25 11:53:10.032017] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2086600 name Existed_Raid, state offline 00:11:24.094 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:24.094 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:24.094 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.094 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4100240 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4100240 ']' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4100240 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4100240 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4100240' 00:11:24.354 killing process with pid 4100240 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4100240 00:11:24.354 [2024-07-25 11:53:10.346381] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.354 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4100240 00:11:24.354 [2024-07-25 11:53:10.347241] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:24.623 00:11:24.623 real 0m10.172s 00:11:24.623 user 0m18.163s 00:11:24.623 sys 0m1.822s 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 ************************************ 00:11:24.623 END TEST 
raid_state_function_test 00:11:24.623 ************************************ 00:11:24.623 11:53:10 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:24.623 11:53:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:24.623 11:53:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.623 11:53:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 ************************************ 00:11:24.623 START TEST raid_state_function_test_sb 00:11:24.623 ************************************ 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:24.623 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4102304 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4102304' 
00:11:24.624 Process raid pid: 4102304 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4102304 /var/tmp/spdk-raid.sock 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4102304 ']' 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:24.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.624 11:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:24.624 [2024-07-25 11:53:10.677599] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:24.624 [2024-07-25 11:53:10.677655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:24.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:24.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:24.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.883 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:24.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:24.884 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:24.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:24.884 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:24.884 [2024-07-25 11:53:10.810452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.884 [2024-07-25 11:53:10.896238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.884 [2024-07-25 11:53:10.951816] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.884 [2024-07-25 11:53:10.951843] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.452 11:53:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.452 11:53:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:25.452 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 
00:11:25.711 [2024-07-25 11:53:11.700699] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.711 [2024-07-25 11:53:11.700735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.711 [2024-07-25 11:53:11.700745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.711 [2024-07-25 11:53:11.700755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.711 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.970 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.970 "name": "Existed_Raid", 00:11:25.970 "uuid": "e276117c-feda-4644-ac30-47ae82a47e38", 00:11:25.970 "strip_size_kb": 64, 00:11:25.970 "state": "configuring", 00:11:25.970 "raid_level": "concat", 00:11:25.970 "superblock": true, 00:11:25.970 "num_base_bdevs": 2, 00:11:25.970 "num_base_bdevs_discovered": 0, 00:11:25.970 "num_base_bdevs_operational": 2, 00:11:25.970 "base_bdevs_list": [ 00:11:25.970 { 00:11:25.970 "name": "BaseBdev1", 00:11:25.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.970 "is_configured": false, 00:11:25.970 "data_offset": 0, 00:11:25.970 "data_size": 0 00:11:25.970 }, 00:11:25.970 { 00:11:25.970 "name": "BaseBdev2", 00:11:25.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.970 "is_configured": false, 00:11:25.970 "data_offset": 0, 00:11:25.970 "data_size": 0 00:11:25.970 } 00:11:25.970 ] 00:11:25.970 }' 00:11:25.970 11:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.970 11:53:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.540 11:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.799 [2024-07-25 11:53:12.715258] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:26.799 [2024-07-25 11:53:12.715286] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bfcf20 name Existed_Raid, state configuring 00:11:26.799 11:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:27.058 [2024-07-25 11:53:12.943868] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.058 [2024-07-25 11:53:12.943899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.058 [2024-07-25 11:53:12.943908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.058 [2024-07-25 11:53:12.943919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.058 11:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.318 [2024-07-25 11:53:13.181863] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.318 BaseBdev1 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.318 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.599 [ 00:11:27.599 { 00:11:27.599 "name": "BaseBdev1", 00:11:27.599 "aliases": [ 00:11:27.599 "46e09889-401e-4082-a0a2-aa692d8775d3" 00:11:27.599 ], 00:11:27.599 "product_name": "Malloc disk", 00:11:27.599 "block_size": 512, 00:11:27.599 "num_blocks": 65536, 00:11:27.599 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:27.599 "assigned_rate_limits": { 00:11:27.599 "rw_ios_per_sec": 0, 00:11:27.599 "rw_mbytes_per_sec": 0, 00:11:27.599 "r_mbytes_per_sec": 0, 00:11:27.599 "w_mbytes_per_sec": 0 00:11:27.599 }, 00:11:27.599 "claimed": true, 00:11:27.599 "claim_type": "exclusive_write", 00:11:27.599 "zoned": false, 00:11:27.599 "supported_io_types": { 00:11:27.599 "read": true, 00:11:27.599 "write": true, 00:11:27.599 "unmap": true, 00:11:27.599 "flush": true, 00:11:27.599 "reset": true, 00:11:27.599 "nvme_admin": false, 00:11:27.599 "nvme_io": false, 00:11:27.599 "nvme_io_md": false, 00:11:27.599 "write_zeroes": true, 00:11:27.599 "zcopy": true, 00:11:27.599 "get_zone_info": false, 00:11:27.599 "zone_management": false, 00:11:27.599 "zone_append": false, 00:11:27.599 "compare": false, 
00:11:27.599 "compare_and_write": false, 00:11:27.599 "abort": true, 00:11:27.599 "seek_hole": false, 00:11:27.599 "seek_data": false, 00:11:27.599 "copy": true, 00:11:27.599 "nvme_iov_md": false 00:11:27.599 }, 00:11:27.599 "memory_domains": [ 00:11:27.599 { 00:11:27.599 "dma_device_id": "system", 00:11:27.599 "dma_device_type": 1 00:11:27.599 }, 00:11:27.599 { 00:11:27.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.599 "dma_device_type": 2 00:11:27.599 } 00:11:27.599 ], 00:11:27.599 "driver_specific": {} 00:11:27.599 } 00:11:27.599 ] 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.599 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.896 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.896 "name": "Existed_Raid", 00:11:27.896 "uuid": "8e5fa429-831e-4d62-bcc3-f8361ea8db95", 00:11:27.896 "strip_size_kb": 64, 00:11:27.896 "state": "configuring", 00:11:27.896 "raid_level": "concat", 00:11:27.896 "superblock": true, 00:11:27.896 "num_base_bdevs": 2, 00:11:27.896 "num_base_bdevs_discovered": 1, 00:11:27.896 "num_base_bdevs_operational": 2, 00:11:27.896 "base_bdevs_list": [ 00:11:27.896 { 00:11:27.896 "name": "BaseBdev1", 00:11:27.896 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:27.896 "is_configured": true, 00:11:27.896 "data_offset": 2048, 00:11:27.896 "data_size": 63488 00:11:27.896 }, 00:11:27.896 { 00:11:27.896 "name": "BaseBdev2", 00:11:27.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.896 "is_configured": false, 00:11:27.896 "data_offset": 0, 00:11:27.896 "data_size": 0 00:11:27.896 } 00:11:27.896 ] 00:11:27.896 }' 00:11:27.896 11:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.896 11:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.463 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:28.721 [2024-07-25 11:53:14.661863] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.721 [2024-07-25 11:53:14.661900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bfc810 name Existed_Raid, state configuring 00:11:28.721 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:28.980 [2024-07-25 11:53:14.886490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.980 [2024-07-25 11:53:14.887886] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.980 [2024-07-25 11:53:14.887917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.980 11:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.238 11:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.238 "name": "Existed_Raid", 00:11:29.238 "uuid": "489a7145-be48-44cf-8357-0a4182c1423c", 00:11:29.238 "strip_size_kb": 64, 00:11:29.238 "state": "configuring", 00:11:29.238 "raid_level": "concat", 00:11:29.238 "superblock": true, 00:11:29.238 "num_base_bdevs": 2, 00:11:29.238 "num_base_bdevs_discovered": 1, 00:11:29.238 "num_base_bdevs_operational": 2, 00:11:29.238 "base_bdevs_list": [ 00:11:29.238 { 00:11:29.238 "name": "BaseBdev1", 00:11:29.238 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:29.238 "is_configured": true, 00:11:29.238 "data_offset": 2048, 00:11:29.238 "data_size": 63488 00:11:29.238 }, 00:11:29.238 { 00:11:29.238 "name": "BaseBdev2", 00:11:29.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:29.238 "is_configured": false, 00:11:29.238 "data_offset": 0, 00:11:29.238 "data_size": 0 00:11:29.238 } 00:11:29.238 ] 00:11:29.238 }' 00:11:29.238 11:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.238 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.804 11:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.804 [2024-07-25 11:53:15.900260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.805 [2024-07-25 11:53:15.900397] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bfd600 00:11:29.805 [2024-07-25 11:53:15.900410] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:29.805 [2024-07-25 11:53:15.900569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bfe840 00:11:29.805 [2024-07-25 11:53:15.900676] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bfd600 00:11:29.805 [2024-07-25 11:53:15.900686] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1bfd600 00:11:29.805 [2024-07-25 11:53:15.900770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.805 BaseBdev2 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:29.805 11:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.062 11:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.320 [ 00:11:30.320 { 00:11:30.320 "name": "BaseBdev2", 00:11:30.320 "aliases": [ 00:11:30.320 "7eb8439e-0535-469c-a058-6d16296a13a1" 00:11:30.320 ], 00:11:30.320 "product_name": "Malloc disk", 00:11:30.320 "block_size": 512, 00:11:30.320 "num_blocks": 65536, 00:11:30.320 "uuid": "7eb8439e-0535-469c-a058-6d16296a13a1", 00:11:30.320 "assigned_rate_limits": { 00:11:30.320 "rw_ios_per_sec": 0, 00:11:30.320 "rw_mbytes_per_sec": 0, 00:11:30.320 "r_mbytes_per_sec": 0, 00:11:30.320 "w_mbytes_per_sec": 0 00:11:30.320 }, 00:11:30.320 "claimed": true, 00:11:30.320 "claim_type": "exclusive_write", 00:11:30.320 "zoned": false, 00:11:30.320 "supported_io_types": { 00:11:30.320 "read": true, 00:11:30.320 "write": true, 00:11:30.320 "unmap": true, 00:11:30.320 "flush": true, 00:11:30.320 "reset": true, 00:11:30.320 "nvme_admin": false, 00:11:30.320 "nvme_io": false, 00:11:30.320 "nvme_io_md": false, 
00:11:30.320 "write_zeroes": true, 00:11:30.320 "zcopy": true, 00:11:30.320 "get_zone_info": false, 00:11:30.320 "zone_management": false, 00:11:30.320 "zone_append": false, 00:11:30.320 "compare": false, 00:11:30.320 "compare_and_write": false, 00:11:30.320 "abort": true, 00:11:30.320 "seek_hole": false, 00:11:30.320 "seek_data": false, 00:11:30.320 "copy": true, 00:11:30.320 "nvme_iov_md": false 00:11:30.320 }, 00:11:30.320 "memory_domains": [ 00:11:30.320 { 00:11:30.320 "dma_device_id": "system", 00:11:30.320 "dma_device_type": 1 00:11:30.320 }, 00:11:30.320 { 00:11:30.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.320 "dma_device_type": 2 00:11:30.320 } 00:11:30.320 ], 00:11:30.320 "driver_specific": {} 00:11:30.320 } 00:11:30.320 ] 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.320 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.578 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:30.578 "name": "Existed_Raid", 00:11:30.578 "uuid": "489a7145-be48-44cf-8357-0a4182c1423c", 00:11:30.578 "strip_size_kb": 64, 00:11:30.578 "state": "online", 00:11:30.578 "raid_level": "concat", 00:11:30.578 "superblock": true, 00:11:30.578 "num_base_bdevs": 2, 00:11:30.578 "num_base_bdevs_discovered": 2, 00:11:30.578 "num_base_bdevs_operational": 2, 00:11:30.578 "base_bdevs_list": [ 00:11:30.578 { 00:11:30.578 "name": "BaseBdev1", 00:11:30.578 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:30.578 "is_configured": true, 00:11:30.578 "data_offset": 2048, 00:11:30.578 "data_size": 63488 00:11:30.578 }, 00:11:30.578 { 00:11:30.578 "name": "BaseBdev2", 00:11:30.578 "uuid": "7eb8439e-0535-469c-a058-6d16296a13a1", 00:11:30.578 "is_configured": true, 00:11:30.578 "data_offset": 2048, 00:11:30.578 "data_size": 63488 00:11:30.578 } 
00:11:30.578 ] 00:11:30.578 }' 00:11:30.578 11:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:30.578 11:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:31.147 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:31.406 [2024-07-25 11:53:17.384412] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.406 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:31.406 "name": "Existed_Raid", 00:11:31.406 "aliases": [ 00:11:31.406 "489a7145-be48-44cf-8357-0a4182c1423c" 00:11:31.406 ], 00:11:31.406 "product_name": "Raid Volume", 00:11:31.406 "block_size": 512, 00:11:31.406 "num_blocks": 126976, 00:11:31.406 "uuid": "489a7145-be48-44cf-8357-0a4182c1423c", 00:11:31.406 "assigned_rate_limits": { 00:11:31.406 "rw_ios_per_sec": 0, 00:11:31.406 "rw_mbytes_per_sec": 0, 00:11:31.406 "r_mbytes_per_sec": 0, 00:11:31.406 "w_mbytes_per_sec": 0 00:11:31.406 }, 00:11:31.406 "claimed": false, 00:11:31.406 "zoned": false, 00:11:31.406 "supported_io_types": { 00:11:31.406 "read": true, 00:11:31.406 "write": true, 00:11:31.406 "unmap": true, 00:11:31.406 "flush": true, 00:11:31.406 "reset": true, 00:11:31.406 "nvme_admin": false, 00:11:31.406 "nvme_io": false, 00:11:31.406 "nvme_io_md": false, 00:11:31.406 "write_zeroes": true, 00:11:31.406 "zcopy": false, 00:11:31.406 "get_zone_info": false, 00:11:31.406 "zone_management": false, 00:11:31.406 "zone_append": false, 00:11:31.406 "compare": false, 00:11:31.406 "compare_and_write": false, 00:11:31.406 "abort": false, 00:11:31.406 "seek_hole": false, 00:11:31.406 "seek_data": false, 00:11:31.406 "copy": false, 00:11:31.406 "nvme_iov_md": false 00:11:31.406 }, 00:11:31.406 "memory_domains": [ 00:11:31.406 { 00:11:31.406 "dma_device_id": "system", 00:11:31.406 "dma_device_type": 1 00:11:31.406 }, 00:11:31.406 { 00:11:31.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.406 "dma_device_type": 2 00:11:31.406 }, 00:11:31.406 { 00:11:31.406 "dma_device_id": "system", 00:11:31.406 "dma_device_type": 1 00:11:31.406 }, 00:11:31.406 { 00:11:31.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.406 "dma_device_type": 2 00:11:31.406 } 00:11:31.406 ], 00:11:31.406 "driver_specific": { 00:11:31.406 "raid": { 00:11:31.406 "uuid": "489a7145-be48-44cf-8357-0a4182c1423c", 00:11:31.406 "strip_size_kb": 64, 00:11:31.406 "state": "online", 00:11:31.406 "raid_level": "concat", 00:11:31.406 "superblock": true, 00:11:31.406 "num_base_bdevs": 2, 00:11:31.406 "num_base_bdevs_discovered": 2, 
00:11:31.406 "num_base_bdevs_operational": 2, 00:11:31.406 "base_bdevs_list": [ 00:11:31.407 { 00:11:31.407 "name": "BaseBdev1", 00:11:31.407 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:31.407 "is_configured": true, 00:11:31.407 "data_offset": 2048, 00:11:31.407 "data_size": 63488 00:11:31.407 }, 00:11:31.407 { 00:11:31.407 "name": "BaseBdev2", 00:11:31.407 "uuid": "7eb8439e-0535-469c-a058-6d16296a13a1", 00:11:31.407 "is_configured": true, 00:11:31.407 "data_offset": 2048, 00:11:31.407 "data_size": 63488 00:11:31.407 } 00:11:31.407 ] 00:11:31.407 } 00:11:31.407 } 00:11:31.407 }' 00:11:31.407 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.407 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:31.407 BaseBdev2' 00:11:31.407 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:31.407 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:31.407 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:31.666 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:31.666 "name": "BaseBdev1", 00:11:31.666 "aliases": [ 00:11:31.666 "46e09889-401e-4082-a0a2-aa692d8775d3" 00:11:31.666 ], 00:11:31.666 "product_name": "Malloc disk", 00:11:31.666 "block_size": 512, 00:11:31.666 "num_blocks": 65536, 00:11:31.666 "uuid": "46e09889-401e-4082-a0a2-aa692d8775d3", 00:11:31.666 "assigned_rate_limits": { 00:11:31.666 "rw_ios_per_sec": 0, 00:11:31.666 "rw_mbytes_per_sec": 0, 00:11:31.666 "r_mbytes_per_sec": 0, 00:11:31.666 "w_mbytes_per_sec": 0 00:11:31.666 }, 00:11:31.666 "claimed": true, 00:11:31.666 "claim_type": "exclusive_write", 00:11:31.666 "zoned": false, 00:11:31.666 "supported_io_types": { 00:11:31.666 "read": true, 00:11:31.666 "write": true, 00:11:31.666 "unmap": true, 00:11:31.666 "flush": true, 00:11:31.666 "reset": true, 00:11:31.666 "nvme_admin": false, 00:11:31.666 "nvme_io": false, 00:11:31.666 "nvme_io_md": false, 00:11:31.666 "write_zeroes": true, 00:11:31.666 "zcopy": true, 00:11:31.666 "get_zone_info": false, 00:11:31.666 "zone_management": false, 00:11:31.666 "zone_append": false, 00:11:31.666 "compare": false, 00:11:31.666 "compare_and_write": false, 00:11:31.666 "abort": true, 00:11:31.666 "seek_hole": false, 00:11:31.666 "seek_data": false, 00:11:31.666 "copy": true, 00:11:31.666 "nvme_iov_md": false 00:11:31.666 }, 00:11:31.666 "memory_domains": [ 00:11:31.666 { 00:11:31.666 "dma_device_id": "system", 00:11:31.666 "dma_device_type": 1 00:11:31.666 }, 00:11:31.666 { 00:11:31.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.666 "dma_device_type": 2 00:11:31.666 } 00:11:31.666 ], 00:11:31.666 "driver_specific": {} 00:11:31.666 }' 00:11:31.666 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.666 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.666 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:31.666 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:31.925 11:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:31.925 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:32.184 "name": "BaseBdev2", 00:11:32.184 "aliases": [ 00:11:32.184 "7eb8439e-0535-469c-a058-6d16296a13a1" 00:11:32.184 ], 00:11:32.184 "product_name": "Malloc disk", 00:11:32.184 "block_size": 512, 00:11:32.184 "num_blocks": 65536, 00:11:32.184 "uuid": "7eb8439e-0535-469c-a058-6d16296a13a1", 00:11:32.184 "assigned_rate_limits": { 00:11:32.184 "rw_ios_per_sec": 0, 00:11:32.184 "rw_mbytes_per_sec": 0, 00:11:32.184 "r_mbytes_per_sec": 0, 00:11:32.184 "w_mbytes_per_sec": 0 00:11:32.184 }, 00:11:32.184 "claimed": true, 00:11:32.184 "claim_type": "exclusive_write", 00:11:32.184 "zoned": false, 00:11:32.184 "supported_io_types": { 00:11:32.184 "read": true, 00:11:32.184 "write": true, 00:11:32.184 "unmap": true, 00:11:32.184 "flush": true, 00:11:32.184 "reset": true, 00:11:32.184 "nvme_admin": false, 00:11:32.184 "nvme_io": false, 00:11:32.184 "nvme_io_md": false, 00:11:32.184 "write_zeroes": true, 00:11:32.184 "zcopy": true, 00:11:32.184 "get_zone_info": false, 00:11:32.184 "zone_management": false, 00:11:32.184 "zone_append": false, 00:11:32.184 "compare": false, 00:11:32.184 "compare_and_write": false, 00:11:32.184 "abort": true, 00:11:32.184 "seek_hole": false, 00:11:32.184 "seek_data": false, 00:11:32.184 "copy": true, 00:11:32.184 "nvme_iov_md": false 00:11:32.184 }, 00:11:32.184 "memory_domains": [ 00:11:32.184 { 00:11:32.184 "dma_device_id": "system", 00:11:32.184 "dma_device_type": 1 00:11:32.184 }, 00:11:32.184 { 00:11:32.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.184 "dma_device_type": 2 00:11:32.184 } 00:11:32.184 ], 00:11:32.184 "driver_specific": {} 00:11:32.184 }' 00:11:32.184 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:32.443 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.702 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.702 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:32.702 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:32.961 [2024-07-25 11:53:18.823997] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.961 [2024-07-25 11:53:18.824021] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.961 [2024-07-25 11:53:18.824059] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.961 11:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.961 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.961 "name": "Existed_Raid", 00:11:32.961 "uuid": "489a7145-be48-44cf-8357-0a4182c1423c", 00:11:32.961 "strip_size_kb": 64, 00:11:32.961 "state": "offline", 
00:11:32.961 "raid_level": "concat", 00:11:32.961 "superblock": true, 00:11:32.961 "num_base_bdevs": 2, 00:11:32.961 "num_base_bdevs_discovered": 1, 00:11:32.961 "num_base_bdevs_operational": 1, 00:11:32.961 "base_bdevs_list": [ 00:11:32.961 { 00:11:32.961 "name": null, 00:11:32.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.961 "is_configured": false, 00:11:32.961 "data_offset": 2048, 00:11:32.961 "data_size": 63488 00:11:32.961 }, 00:11:32.961 { 00:11:32.961 "name": "BaseBdev2", 00:11:32.961 "uuid": "7eb8439e-0535-469c-a058-6d16296a13a1", 00:11:32.961 "is_configured": true, 00:11:32.961 "data_offset": 2048, 00:11:32.961 "data_size": 63488 00:11:32.961 } 00:11:32.961 ] 00:11:32.961 }' 00:11:32.962 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.962 11:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.529 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:33.529 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:33.788 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.788 11:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:34.103 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:34.103 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.103 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:34.362 [2024-07-25 11:53:20.369019] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.362 [2024-07-25 11:53:20.369062] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bfd600 name Existed_Raid, state offline 00:11:34.362 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:34.362 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:34.362 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.362 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4102304 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4102304 ']' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4102304 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4102304 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4102304' 00:11:34.621 killing process with pid 4102304 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4102304 00:11:34.621 [2024-07-25 11:53:20.683552] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.621 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4102304 00:11:34.621 [2024-07-25 11:53:20.684390] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.879 11:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:34.879 00:11:34.879 real 0m10.258s 00:11:34.879 user 0m18.237s 00:11:34.879 sys 0m1.956s 00:11:34.879 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.879 11:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.879 ************************************ 00:11:34.879 END TEST raid_state_function_test_sb 00:11:34.879 ************************************ 00:11:34.879 11:53:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:34.879 11:53:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:34.879 11:53:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.879 11:53:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.879 ************************************ 00:11:34.879 START TEST raid_superblock_test 00:11:34.879 ************************************ 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:34.879 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4104264 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4104264 /var/tmp/spdk-raid.sock 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4104264 ']' 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:34.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.880 11:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.139 [2024-07-25 11:53:21.013871] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:35.139 [2024-07-25 11:53:21.013928] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104264 ] 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:11:35.139 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.139 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:35.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:35.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.140 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:35.140 [2024-07-25 11:53:21.146729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.140 [2024-07-25 11:53:21.232744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.399 [2024-07-25 11:53:21.286989] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.399 [2024-07-25 11:53:21.287020] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # 
(( i == 0 )) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.966 11:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:36.225 malloc1 00:11:36.225 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.225 [2024-07-25 11:53:22.338692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.225 [2024-07-25 11:53:22.338733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.225 [2024-07-25 11:53:22.338752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba02f0 00:11:36.225 [2024-07-25 11:53:22.338764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.225 [2024-07-25 11:53:22.340247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.225 [2024-07-25 11:53:22.340274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.225 pt1 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:36.484 malloc2 00:11:36.484 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.743 [2024-07-25 11:53:22.780220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.743 [2024-07-25 11:53:22.780263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.743 [2024-07-25 11:53:22.780278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba16d0 00:11:36.743 [2024-07-25 11:53:22.780289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.743 [2024-07-25 11:53:22.781717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.743 [2024-07-25 11:53:22.781743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.743 pt2 00:11:36.743 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:36.743 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:36.743 11:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:11:37.002 [2024-07-25 11:53:23.008855] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.002 [2024-07-25 11:53:23.010016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.002 [2024-07-25 11:53:23.010154] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d3a310 00:11:37.002 [2024-07-25 11:53:23.010167] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:37.002 [2024-07-25 11:53:23.010343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d39ce0 00:11:37.002 [2024-07-25 11:53:23.010471] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d3a310 00:11:37.002 [2024-07-25 11:53:23.010480] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d3a310 00:11:37.002 [2024-07-25 11:53:23.010568] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.002 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.002 11:53:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.261 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.262 "name": "raid_bdev1", 00:11:37.262 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:37.262 "strip_size_kb": 64, 00:11:37.262 "state": "online", 00:11:37.262 "raid_level": "concat", 00:11:37.262 "superblock": true, 00:11:37.262 "num_base_bdevs": 2, 00:11:37.262 "num_base_bdevs_discovered": 2, 00:11:37.262 "num_base_bdevs_operational": 2, 00:11:37.262 "base_bdevs_list": [ 00:11:37.262 { 00:11:37.262 "name": "pt1", 00:11:37.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.262 "is_configured": true, 00:11:37.262 "data_offset": 2048, 00:11:37.262 "data_size": 63488 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "name": "pt2", 00:11:37.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.262 "is_configured": true, 00:11:37.262 "data_offset": 2048, 00:11:37.262 "data_size": 63488 00:11:37.262 } 00:11:37.262 ] 00:11:37.262 }' 00:11:37.262 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.262 11:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:37.828 11:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:38.087 [2024-07-25 11:53:24.039748] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:38.087 "name": "raid_bdev1", 00:11:38.087 "aliases": [ 00:11:38.087 "499118d8-16bc-4c33-a8c9-4d7f323bb9ba" 00:11:38.087 ], 00:11:38.087 "product_name": "Raid Volume", 00:11:38.087 "block_size": 512, 00:11:38.087 "num_blocks": 126976, 00:11:38.087 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:38.087 "assigned_rate_limits": { 00:11:38.087 "rw_ios_per_sec": 0, 00:11:38.087 "rw_mbytes_per_sec": 0, 00:11:38.087 "r_mbytes_per_sec": 0, 00:11:38.087 "w_mbytes_per_sec": 0 00:11:38.087 }, 00:11:38.087 "claimed": false, 00:11:38.087 "zoned": false, 00:11:38.087 "supported_io_types": { 00:11:38.087 "read": true, 00:11:38.087 "write": true, 00:11:38.087 "unmap": true, 00:11:38.087 "flush": true, 00:11:38.087 "reset": true, 00:11:38.087 "nvme_admin": false, 00:11:38.087 "nvme_io": false, 00:11:38.087 "nvme_io_md": false, 00:11:38.087 "write_zeroes": true, 00:11:38.087 "zcopy": false, 00:11:38.087 "get_zone_info": false, 00:11:38.087 "zone_management": false, 00:11:38.087 "zone_append": false, 00:11:38.087 "compare": false, 00:11:38.087 "compare_and_write": false, 00:11:38.087 "abort": false, 00:11:38.087 
"seek_hole": false, 00:11:38.087 "seek_data": false, 00:11:38.087 "copy": false, 00:11:38.087 "nvme_iov_md": false 00:11:38.087 }, 00:11:38.087 "memory_domains": [ 00:11:38.087 { 00:11:38.087 "dma_device_id": "system", 00:11:38.087 "dma_device_type": 1 00:11:38.087 }, 00:11:38.087 { 00:11:38.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.087 "dma_device_type": 2 00:11:38.087 }, 00:11:38.087 { 00:11:38.087 "dma_device_id": "system", 00:11:38.087 "dma_device_type": 1 00:11:38.087 }, 00:11:38.087 { 00:11:38.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.087 "dma_device_type": 2 00:11:38.087 } 00:11:38.087 ], 00:11:38.087 "driver_specific": { 00:11:38.087 "raid": { 00:11:38.087 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:38.087 "strip_size_kb": 64, 00:11:38.087 "state": "online", 00:11:38.087 "raid_level": "concat", 00:11:38.087 "superblock": true, 00:11:38.087 "num_base_bdevs": 2, 00:11:38.087 "num_base_bdevs_discovered": 2, 00:11:38.087 "num_base_bdevs_operational": 2, 00:11:38.087 "base_bdevs_list": [ 00:11:38.087 { 00:11:38.087 "name": "pt1", 00:11:38.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.087 "is_configured": true, 00:11:38.087 "data_offset": 2048, 00:11:38.087 "data_size": 63488 00:11:38.087 }, 00:11:38.087 { 00:11:38.087 "name": "pt2", 00:11:38.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.087 "is_configured": true, 00:11:38.087 "data_offset": 2048, 00:11:38.087 "data_size": 63488 00:11:38.087 } 00:11:38.087 ] 00:11:38.087 } 00:11:38.087 } 00:11:38.087 }' 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:38.087 pt2' 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:38.087 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:38.346 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:38.346 "name": "pt1", 00:11:38.346 "aliases": [ 00:11:38.346 "00000000-0000-0000-0000-000000000001" 00:11:38.346 ], 00:11:38.346 "product_name": "passthru", 00:11:38.346 "block_size": 512, 00:11:38.346 "num_blocks": 65536, 00:11:38.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.346 "assigned_rate_limits": { 00:11:38.346 "rw_ios_per_sec": 0, 00:11:38.346 "rw_mbytes_per_sec": 0, 00:11:38.346 "r_mbytes_per_sec": 0, 00:11:38.346 "w_mbytes_per_sec": 0 00:11:38.346 }, 00:11:38.346 "claimed": true, 00:11:38.346 "claim_type": "exclusive_write", 00:11:38.346 "zoned": false, 00:11:38.346 "supported_io_types": { 00:11:38.346 "read": true, 00:11:38.346 "write": true, 00:11:38.346 "unmap": true, 00:11:38.346 "flush": true, 00:11:38.346 "reset": true, 00:11:38.346 "nvme_admin": false, 00:11:38.346 "nvme_io": false, 00:11:38.346 "nvme_io_md": false, 00:11:38.346 "write_zeroes": true, 00:11:38.346 "zcopy": true, 00:11:38.346 "get_zone_info": false, 00:11:38.346 "zone_management": false, 00:11:38.346 "zone_append": false, 00:11:38.346 "compare": false, 00:11:38.346 "compare_and_write": false, 00:11:38.346 "abort": true, 00:11:38.346 "seek_hole": false, 00:11:38.346 "seek_data": false, 
00:11:38.346 "copy": true, 00:11:38.346 "nvme_iov_md": false 00:11:38.346 }, 00:11:38.346 "memory_domains": [ 00:11:38.346 { 00:11:38.346 "dma_device_id": "system", 00:11:38.346 "dma_device_type": 1 00:11:38.346 }, 00:11:38.346 { 00:11:38.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.346 "dma_device_type": 2 00:11:38.346 } 00:11:38.346 ], 00:11:38.346 "driver_specific": { 00:11:38.346 "passthru": { 00:11:38.346 "name": "pt1", 00:11:38.346 "base_bdev_name": "malloc1" 00:11:38.346 } 00:11:38.346 } 00:11:38.346 }' 00:11:38.346 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.346 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.346 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:38.346 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:38.605 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:38.863 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:38.863 "name": "pt2", 00:11:38.863 "aliases": [ 00:11:38.863 "00000000-0000-0000-0000-000000000002" 00:11:38.863 ], 00:11:38.863 "product_name": "passthru", 00:11:38.863 "block_size": 512, 00:11:38.863 "num_blocks": 65536, 00:11:38.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.863 "assigned_rate_limits": { 00:11:38.863 "rw_ios_per_sec": 0, 00:11:38.863 "rw_mbytes_per_sec": 0, 00:11:38.863 "r_mbytes_per_sec": 0, 00:11:38.863 "w_mbytes_per_sec": 0 00:11:38.863 }, 00:11:38.863 "claimed": true, 00:11:38.863 "claim_type": "exclusive_write", 00:11:38.863 "zoned": false, 00:11:38.863 "supported_io_types": { 00:11:38.863 "read": true, 00:11:38.863 "write": true, 00:11:38.863 "unmap": true, 00:11:38.863 "flush": true, 00:11:38.863 "reset": true, 00:11:38.864 "nvme_admin": false, 00:11:38.864 "nvme_io": false, 00:11:38.864 "nvme_io_md": false, 00:11:38.864 "write_zeroes": true, 00:11:38.864 "zcopy": true, 00:11:38.864 "get_zone_info": false, 00:11:38.864 "zone_management": false, 00:11:38.864 "zone_append": false, 00:11:38.864 "compare": false, 00:11:38.864 "compare_and_write": false, 00:11:38.864 "abort": true, 00:11:38.864 "seek_hole": false, 00:11:38.864 "seek_data": false, 00:11:38.864 "copy": true, 00:11:38.864 "nvme_iov_md": false 00:11:38.864 }, 00:11:38.864 "memory_domains": [ 00:11:38.864 { 
00:11:38.864 "dma_device_id": "system", 00:11:38.864 "dma_device_type": 1 00:11:38.864 }, 00:11:38.864 { 00:11:38.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.864 "dma_device_type": 2 00:11:38.864 } 00:11:38.864 ], 00:11:38.864 "driver_specific": { 00:11:38.864 "passthru": { 00:11:38.864 "name": "pt2", 00:11:38.864 "base_bdev_name": "malloc2" 00:11:38.864 } 00:11:38.864 } 00:11:38.864 }' 00:11:38.864 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.864 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.864 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:38.864 11:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.122 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:39.381 [2024-07-25 11:53:25.459490] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=499118d8-16bc-4c33-a8c9-4d7f323bb9ba 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 499118d8-16bc-4c33-a8c9-4d7f323bb9ba ']' 00:11:39.381 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:39.640 [2024-07-25 11:53:25.683856] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.640 [2024-07-25 11:53:25.683873] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.640 [2024-07-25 11:53:25.683921] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.640 [2024-07-25 11:53:25.683960] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.640 [2024-07-25 11:53:25.683971] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d3a310 name raid_bdev1, state offline 00:11:39.640 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.640 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:39.898 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:39.898 
11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:39.898 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.898 11:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:40.156 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.156 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:40.415 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:40.415 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:40.741 [2024-07-25 11:53:26.834847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:40.741 [2024-07-25 11:53:26.836077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:40.741 [2024-07-25 11:53:26.836125] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different 
raid bdev found on bdev malloc1 00:11:40.741 [2024-07-25 11:53:26.836168] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:40.741 [2024-07-25 11:53:26.836186] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.741 [2024-07-25 11:53:26.836195] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d433f0 name raid_bdev1, state configuring 00:11:40.741 request: 00:11:40.741 { 00:11:40.741 "name": "raid_bdev1", 00:11:40.741 "raid_level": "concat", 00:11:40.741 "base_bdevs": [ 00:11:40.741 "malloc1", 00:11:40.741 "malloc2" 00:11:40.741 ], 00:11:40.741 "strip_size_kb": 64, 00:11:40.741 "superblock": false, 00:11:40.741 "method": "bdev_raid_create", 00:11:40.741 "req_id": 1 00:11:40.741 } 00:11:40.741 Got JSON-RPC error response 00:11:40.741 response: 00:11:40.741 { 00:11:40.741 "code": -17, 00:11:40.741 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:40.741 } 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.741 11:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:41.309 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:41.309 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:41.309 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:41.568 [2024-07-25 11:53:27.580726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:41.568 [2024-07-25 11:53:27.580771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.568 [2024-07-25 11:53:27.580791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d43d70 00:11:41.568 [2024-07-25 11:53:27.580803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.568 [2024-07-25 11:53:27.582300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.568 [2024-07-25 11:53:27.582325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:41.568 [2024-07-25 11:53:27.582385] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:41.568 [2024-07-25 11:53:27.582410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:41.568 pt1 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.568 11:53:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.568 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.827 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.827 "name": "raid_bdev1", 00:11:41.827 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:41.827 "strip_size_kb": 64, 00:11:41.827 "state": "configuring", 00:11:41.827 "raid_level": "concat", 00:11:41.827 "superblock": true, 00:11:41.827 "num_base_bdevs": 2, 00:11:41.827 "num_base_bdevs_discovered": 1, 00:11:41.827 "num_base_bdevs_operational": 2, 00:11:41.827 "base_bdevs_list": [ 00:11:41.827 { 00:11:41.827 "name": "pt1", 00:11:41.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.827 "is_configured": true, 00:11:41.827 "data_offset": 2048, 00:11:41.827 "data_size": 63488 00:11:41.827 }, 00:11:41.827 { 00:11:41.827 "name": null, 00:11:41.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.827 "is_configured": false, 00:11:41.827 "data_offset": 2048, 00:11:41.827 "data_size": 63488 00:11:41.827 } 00:11:41.827 ] 00:11:41.827 }' 00:11:41.827 11:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.827 11:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.394 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:42.394 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:42.394 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:42.394 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.652 [2024-07-25 11:53:28.611433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.652 [2024-07-25 11:53:28.611476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.652 [2024-07-25 11:53:28.611493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d3abb0 00:11:42.652 [2024-07-25 11:53:28.611504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.652 [2024-07-25 11:53:28.611809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.652 [2024-07-25 11:53:28.611826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.652 [2024-07-25 11:53:28.611882] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.652 [2024-07-25 11:53:28.611903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.652 [2024-07-25 11:53:28.611990] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d39120 00:11:42.652 [2024-07-25 11:53:28.612000] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:42.652 [2024-07-25 11:53:28.612162] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b99c20 00:11:42.652 [2024-07-25 11:53:28.612274] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d39120 00:11:42.652 [2024-07-25 11:53:28.612283] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d39120 00:11:42.652 [2024-07-25 11:53:28.612370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.652 pt2 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.652 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.909 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.909 "name": "raid_bdev1", 00:11:42.909 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:42.909 "strip_size_kb": 64, 00:11:42.910 "state": "online", 00:11:42.910 "raid_level": "concat", 00:11:42.910 "superblock": true, 00:11:42.910 "num_base_bdevs": 2, 00:11:42.910 "num_base_bdevs_discovered": 2, 00:11:42.910 "num_base_bdevs_operational": 2, 00:11:42.910 "base_bdevs_list": [ 00:11:42.910 { 00:11:42.910 "name": "pt1", 00:11:42.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.910 "is_configured": true, 00:11:42.910 "data_offset": 2048, 00:11:42.910 "data_size": 63488 00:11:42.910 }, 00:11:42.910 { 00:11:42.910 "name": "pt2", 00:11:42.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.910 "is_configured": true, 00:11:42.910 "data_offset": 2048, 00:11:42.910 "data_size": 63488 00:11:42.910 } 
00:11:42.910 ] 00:11:42.910 }' 00:11:42.910 11:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.910 11:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:43.842 [2024-07-25 11:53:29.927135] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.842 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:43.842 "name": "raid_bdev1", 00:11:43.842 "aliases": [ 00:11:43.842 "499118d8-16bc-4c33-a8c9-4d7f323bb9ba" 00:11:43.842 ], 00:11:43.842 "product_name": "Raid Volume", 00:11:43.842 "block_size": 512, 00:11:43.842 "num_blocks": 126976, 00:11:43.842 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:43.842 "assigned_rate_limits": { 00:11:43.842 "rw_ios_per_sec": 0, 00:11:43.842 "rw_mbytes_per_sec": 0, 00:11:43.842 "r_mbytes_per_sec": 0, 00:11:43.842 "w_mbytes_per_sec": 0 00:11:43.842 }, 00:11:43.842 "claimed": false, 00:11:43.842 "zoned": false, 00:11:43.842 "supported_io_types": { 00:11:43.842 "read": true, 00:11:43.842 "write": true, 00:11:43.842 "unmap": true, 00:11:43.842 "flush": true, 00:11:43.842 "reset": true, 00:11:43.842 "nvme_admin": false, 00:11:43.842 "nvme_io": false, 00:11:43.842 "nvme_io_md": false, 00:11:43.842 "write_zeroes": true, 00:11:43.842 "zcopy": false, 00:11:43.842 "get_zone_info": false, 00:11:43.842 "zone_management": false, 00:11:43.842 "zone_append": false, 00:11:43.842 "compare": false, 00:11:43.842 "compare_and_write": false, 00:11:43.842 "abort": false, 00:11:43.842 "seek_hole": false, 00:11:43.842 "seek_data": false, 00:11:43.842 "copy": false, 00:11:43.842 "nvme_iov_md": false 00:11:43.842 }, 00:11:43.842 "memory_domains": [ 00:11:43.842 { 00:11:43.842 "dma_device_id": "system", 00:11:43.842 "dma_device_type": 1 00:11:43.842 }, 00:11:43.842 { 00:11:43.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.842 "dma_device_type": 2 00:11:43.842 }, 00:11:43.842 { 00:11:43.842 "dma_device_id": "system", 00:11:43.842 "dma_device_type": 1 00:11:43.842 }, 00:11:43.842 { 00:11:43.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.843 "dma_device_type": 2 00:11:43.843 } 00:11:43.843 ], 00:11:43.843 "driver_specific": { 00:11:43.843 "raid": { 00:11:43.843 "uuid": "499118d8-16bc-4c33-a8c9-4d7f323bb9ba", 00:11:43.843 "strip_size_kb": 64, 00:11:43.843 "state": "online", 00:11:43.843 "raid_level": "concat", 00:11:43.843 "superblock": true, 00:11:43.843 "num_base_bdevs": 2, 00:11:43.843 "num_base_bdevs_discovered": 2, 00:11:43.843 "num_base_bdevs_operational": 2, 00:11:43.843 "base_bdevs_list": [ 
00:11:43.843 { 00:11:43.843 "name": "pt1", 00:11:43.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.843 "is_configured": true, 00:11:43.843 "data_offset": 2048, 00:11:43.843 "data_size": 63488 00:11:43.843 }, 00:11:43.843 { 00:11:43.843 "name": "pt2", 00:11:43.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.843 "is_configured": true, 00:11:43.843 "data_offset": 2048, 00:11:43.843 "data_size": 63488 00:11:43.843 } 00:11:43.843 ] 00:11:43.843 } 00:11:43.843 } 00:11:43.843 }' 00:11:43.843 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.100 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:44.100 pt2' 00:11:44.101 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.101 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:44.101 11:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.358 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.358 "name": "pt1", 00:11:44.358 "aliases": [ 00:11:44.358 "00000000-0000-0000-0000-000000000001" 00:11:44.358 ], 00:11:44.358 "product_name": "passthru", 00:11:44.358 "block_size": 512, 00:11:44.358 "num_blocks": 65536, 00:11:44.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.358 "assigned_rate_limits": { 00:11:44.358 "rw_ios_per_sec": 0, 00:11:44.358 "rw_mbytes_per_sec": 0, 00:11:44.358 "r_mbytes_per_sec": 0, 00:11:44.358 "w_mbytes_per_sec": 0 00:11:44.358 }, 00:11:44.358 "claimed": true, 00:11:44.358 "claim_type": "exclusive_write", 00:11:44.358 "zoned": false, 00:11:44.358 "supported_io_types": { 00:11:44.358 "read": true, 00:11:44.358 "write": true, 00:11:44.358 "unmap": true, 00:11:44.358 "flush": true, 00:11:44.358 "reset": true, 00:11:44.358 "nvme_admin": false, 00:11:44.358 "nvme_io": false, 00:11:44.358 "nvme_io_md": false, 00:11:44.358 "write_zeroes": true, 00:11:44.358 "zcopy": true, 00:11:44.358 "get_zone_info": false, 00:11:44.358 "zone_management": false, 00:11:44.358 "zone_append": false, 00:11:44.358 "compare": false, 00:11:44.358 "compare_and_write": false, 00:11:44.358 "abort": true, 00:11:44.358 "seek_hole": false, 00:11:44.358 "seek_data": false, 00:11:44.358 "copy": true, 00:11:44.358 "nvme_iov_md": false 00:11:44.358 }, 00:11:44.358 "memory_domains": [ 00:11:44.358 { 00:11:44.358 "dma_device_id": "system", 00:11:44.358 "dma_device_type": 1 00:11:44.358 }, 00:11:44.358 { 00:11:44.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.359 "dma_device_type": 2 00:11:44.359 } 00:11:44.359 ], 00:11:44.359 "driver_specific": { 00:11:44.359 "passthru": { 00:11:44.359 "name": "pt1", 00:11:44.359 "base_bdev_name": "malloc1" 00:11:44.359 } 00:11:44.359 } 00:11:44.359 }' 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.359 11:53:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.359 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:44.616 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.875 "name": "pt2", 00:11:44.875 "aliases": [ 00:11:44.875 "00000000-0000-0000-0000-000000000002" 00:11:44.875 ], 00:11:44.875 "product_name": "passthru", 00:11:44.875 "block_size": 512, 00:11:44.875 "num_blocks": 65536, 00:11:44.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.875 "assigned_rate_limits": { 00:11:44.875 "rw_ios_per_sec": 0, 00:11:44.875 "rw_mbytes_per_sec": 0, 00:11:44.875 "r_mbytes_per_sec": 0, 00:11:44.875 "w_mbytes_per_sec": 0 00:11:44.875 }, 00:11:44.875 "claimed": true, 00:11:44.875 "claim_type": "exclusive_write", 00:11:44.875 "zoned": false, 00:11:44.875 "supported_io_types": { 00:11:44.875 "read": true, 00:11:44.875 "write": true, 00:11:44.875 "unmap": true, 00:11:44.875 "flush": true, 00:11:44.875 "reset": true, 00:11:44.875 "nvme_admin": false, 00:11:44.875 "nvme_io": false, 00:11:44.875 "nvme_io_md": false, 00:11:44.875 "write_zeroes": true, 00:11:44.875 "zcopy": true, 00:11:44.875 "get_zone_info": false, 00:11:44.875 "zone_management": false, 00:11:44.875 "zone_append": false, 00:11:44.875 "compare": false, 00:11:44.875 "compare_and_write": false, 00:11:44.875 "abort": true, 00:11:44.875 "seek_hole": false, 00:11:44.875 "seek_data": false, 00:11:44.875 "copy": true, 00:11:44.875 "nvme_iov_md": false 00:11:44.875 }, 00:11:44.875 "memory_domains": [ 00:11:44.875 { 00:11:44.875 "dma_device_id": "system", 00:11:44.875 "dma_device_type": 1 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.875 "dma_device_type": 2 00:11:44.875 } 00:11:44.875 ], 00:11:44.875 "driver_specific": { 00:11:44.875 "passthru": { 00:11:44.875 "name": "pt2", 00:11:44.875 "base_bdev_name": "malloc2" 00:11:44.875 } 00:11:44.875 } 00:11:44.875 }' 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.875 11:53:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:45.133 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:45.390 [2024-07-25 11:53:31.342879] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 499118d8-16bc-4c33-a8c9-4d7f323bb9ba '!=' 499118d8-16bc-4c33-a8c9-4d7f323bb9ba ']' 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4104264 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4104264 ']' 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4104264 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4104264 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4104264' 00:11:45.390 killing process with pid 4104264 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4104264 00:11:45.390 [2024-07-25 11:53:31.422509] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.390 [2024-07-25 11:53:31.422564] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.390 [2024-07-25 11:53:31.422602] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.390 [2024-07-25 11:53:31.422613] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d39120 name raid_bdev1, state offline 00:11:45.390 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4104264 00:11:45.391 [2024-07-25 11:53:31.438536] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.647 11:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:45.647 00:11:45.647 real 0m10.670s 00:11:45.647 user 0m19.168s 00:11:45.647 sys 0m1.852s 00:11:45.647 11:53:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.647 11:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.647 ************************************ 00:11:45.647 END TEST raid_superblock_test 00:11:45.647 ************************************ 00:11:45.647 11:53:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:45.647 11:53:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:45.647 11:53:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.647 11:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.647 ************************************ 00:11:45.647 START TEST raid_read_error_test 00:11:45.647 ************************************ 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9Q3CyOP8pf 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4106233 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
waitforlisten 4106233 /var/tmp/spdk-raid.sock 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4106233 ']' 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:45.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.647 11:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.904 [2024-07-25 11:53:31.790205] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:45.904 [2024-07-25 11:53:31.790267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106233 ] 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:45.904 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:45.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.904 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:45.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.905 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:45.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.905 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:45.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.905 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:45.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.905 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:45.905 [2024-07-25 11:53:31.921218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.905 [2024-07-25 11:53:32.007750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.162 [2024-07-25 11:53:32.069291] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.162 [2024-07-25 11:53:32.069335] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.786 11:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.786 11:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:46.786 11:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:46.786 11:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.786 BaseBdev1_malloc 00:11:47.043 11:53:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:47.043 true 00:11:47.043 11:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:47.300 [2024-07-25 11:53:33.335216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:47.300 [2024-07-25 11:53:33.335254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.300 [2024-07-25 11:53:33.335272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2878190 00:11:47.300 [2024-07-25 11:53:33.335283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.300 [2024-07-25 11:53:33.336825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.300 [2024-07-25 11:53:33.336857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.300 BaseBdev1 00:11:47.300 11:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:47.300 11:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.557 BaseBdev2_malloc 00:11:47.557 11:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:47.815 true 00:11:47.815 11:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:48.073 [2024-07-25 11:53:34.025230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:48.073 [2024-07-25 11:53:34.025267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.073 [2024-07-25 11:53:34.025284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x287ce20 00:11:48.073 [2024-07-25 11:53:34.025296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.073 [2024-07-25 11:53:34.026625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.073 [2024-07-25 11:53:34.026651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.073 BaseBdev2 00:11:48.073 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:48.331 [2024-07-25 11:53:34.249853] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.331 [2024-07-25 11:53:34.251015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.331 [2024-07-25 11:53:34.251197] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x287ea50 00:11:48.331 [2024-07-25 11:53:34.251211] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:48.331 [2024-07-25 
11:53:34.251383] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x287e2b0 00:11:48.331 [2024-07-25 11:53:34.251520] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x287ea50 00:11:48.331 [2024-07-25 11:53:34.251530] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x287ea50 00:11:48.331 [2024-07-25 11:53:34.251622] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.331 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.589 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.589 "name": "raid_bdev1", 00:11:48.589 "uuid": "7cd8dfb5-5669-4440-a0d7-3da0ef96e609", 00:11:48.589 "strip_size_kb": 64, 00:11:48.589 "state": "online", 00:11:48.589 "raid_level": "concat", 00:11:48.589 "superblock": true, 00:11:48.589 "num_base_bdevs": 2, 00:11:48.589 "num_base_bdevs_discovered": 2, 00:11:48.589 "num_base_bdevs_operational": 2, 00:11:48.589 "base_bdevs_list": [ 00:11:48.589 { 00:11:48.589 "name": "BaseBdev1", 00:11:48.589 "uuid": "f14c10f7-90e8-52c1-9dcb-6dccc4cbab6f", 00:11:48.589 "is_configured": true, 00:11:48.589 "data_offset": 2048, 00:11:48.589 "data_size": 63488 00:11:48.589 }, 00:11:48.589 { 00:11:48.589 "name": "BaseBdev2", 00:11:48.589 "uuid": "bb3b928f-1907-5cdb-bd33-ee9d7d3a8736", 00:11:48.589 "is_configured": true, 00:11:48.589 "data_offset": 2048, 00:11:48.589 "data_size": 63488 00:11:48.589 } 00:11:48.589 ] 00:11:48.589 }' 00:11:48.589 11:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.589 11:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.156 11:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:49.156 11:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:49.156 [2024-07-25 11:53:35.140451] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2879b50 00:11:50.090 11:53:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:50.348 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.349 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.349 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.349 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.349 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.349 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.606 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.606 "name": "raid_bdev1", 00:11:50.606 "uuid": "7cd8dfb5-5669-4440-a0d7-3da0ef96e609", 00:11:50.606 "strip_size_kb": 64, 00:11:50.606 "state": "online", 00:11:50.606 "raid_level": "concat", 00:11:50.606 "superblock": true, 00:11:50.606 "num_base_bdevs": 2, 00:11:50.606 "num_base_bdevs_discovered": 2, 00:11:50.606 "num_base_bdevs_operational": 2, 00:11:50.606 "base_bdevs_list": [ 00:11:50.606 { 00:11:50.606 "name": "BaseBdev1", 00:11:50.606 "uuid": "f14c10f7-90e8-52c1-9dcb-6dccc4cbab6f", 00:11:50.606 "is_configured": true, 00:11:50.606 "data_offset": 2048, 00:11:50.606 "data_size": 63488 00:11:50.606 }, 00:11:50.606 { 00:11:50.606 "name": "BaseBdev2", 00:11:50.606 "uuid": "bb3b928f-1907-5cdb-bd33-ee9d7d3a8736", 00:11:50.606 "is_configured": true, 00:11:50.606 "data_offset": 2048, 00:11:50.606 "data_size": 63488 00:11:50.606 } 00:11:50.606 ] 00:11:50.606 }' 00:11:50.606 11:53:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.606 11:53:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.174 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:51.174 [2024-07-25 11:53:37.282745] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.174 [2024-07-25 11:53:37.282783] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:51.174 [2024-07-25 11:53:37.285688] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.174 [2024-07-25 11:53:37.285718] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.174 [2024-07-25 11:53:37.285744] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.174 [2024-07-25 11:53:37.285754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x287ea50 name raid_bdev1, state offline 00:11:51.174 0 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4106233 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4106233 ']' 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4106233 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4106233 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4106233' 00:11:51.433 killing process with pid 4106233 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4106233 00:11:51.433 [2024-07-25 11:53:37.361545] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.433 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4106233 00:11:51.433 [2024-07-25 11:53:37.370932] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9Q3CyOP8pf 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:11:51.693 00:11:51.693 real 0m5.858s 00:11:51.693 user 0m9.090s 00:11:51.693 sys 0m1.045s 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.693 11:53:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.693 ************************************ 00:11:51.693 END TEST raid_read_error_test 00:11:51.693 ************************************ 00:11:51.693 11:53:37 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:51.693 11:53:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:51.693 11:53:37 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.693 11:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.693 ************************************ 00:11:51.693 START TEST raid_write_error_test 00:11:51.693 ************************************ 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TFz9b7sTQp 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4107363 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4107363 /var/tmp/spdk-raid.sock 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4107363 ']' 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.693 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.693 [2024-07-25 11:53:37.711899] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:51.693 [2024-07-25 11:53:37.711954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107363 ] 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 
0000:3f:01.1 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:51.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.693 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:51.953 [2024-07-25 11:53:37.842847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.953 [2024-07-25 11:53:37.928673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.953 [2024-07-25 11:53:37.991305] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.953 [2024-07-25 11:53:37.991347] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.520 11:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.520 11:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:52.520 11:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:52.520 11:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.779 BaseBdev1_malloc 00:11:52.779 11:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:53.037 true 00:11:53.037 11:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:53.295 [2024-07-25 11:53:39.160391] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:53.295 [2024-07-25 11:53:39.160430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.295 [2024-07-25 11:53:39.160448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa41190 00:11:53.295 [2024-07-25 11:53:39.160459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.295 [2024-07-25 11:53:39.162027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.295 [2024-07-25 11:53:39.162056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.295 BaseBdev1 00:11:53.295 11:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:53.295 11:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.554 BaseBdev2_malloc 00:11:53.812 11:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:53.812 true 00:11:53.812 11:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:54.379 [2024-07-25 11:53:40.379771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:54.379 [2024-07-25 11:53:40.379813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.379 [2024-07-25 11:53:40.379831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa45e20 00:11:54.379 [2024-07-25 11:53:40.379843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.379 [2024-07-25 11:53:40.381279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.379 [2024-07-25 11:53:40.381305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.379 BaseBdev2 00:11:54.379 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:54.638 [2024-07-25 11:53:40.608401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.638 [2024-07-25 11:53:40.609577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.638 [2024-07-25 11:53:40.609751] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xa47a50 00:11:54.638 [2024-07-25 11:53:40.609763] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:54.638 [2024-07-25 11:53:40.609936] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa472b0 00:11:54.638 [2024-07-25 11:53:40.610073] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa47a50 00:11:54.638 [2024-07-25 11:53:40.610082] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa47a50 00:11:54.638 [2024-07-25 11:53:40.610185] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.638 11:53:40 
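The sequence traced above builds each base device as a three-layer stack before assembling the array: a 32 MiB malloc bdev with 512-byte blocks, an error bdev on top of it (exposed with the EE_ prefix), and a passthru bdev that gives the layer the BaseBdevN name the raid will claim. A minimal bash sketch of the same RPC sequence, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock; the $rpc shorthand and the loop are conveniences of the sketch, while the RPC names and arguments are the ones in the trace:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2; do
      $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc            # 32 MiB backing device, 512-byte blocks
      $rpc bdev_error_create ${bdev}_malloc                       # wraps it as EE_${bdev}_malloc
      $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}   # renames the stack to what the raid expects
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # 64 KiB strip, concat, with superblock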
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.638 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.897 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.897 "name": "raid_bdev1", 00:11:54.897 "uuid": "4fb91486-7929-4737-a78f-1ecb0feaa649", 00:11:54.897 "strip_size_kb": 64, 00:11:54.897 "state": "online", 00:11:54.897 "raid_level": "concat", 00:11:54.897 "superblock": true, 00:11:54.897 "num_base_bdevs": 2, 00:11:54.897 "num_base_bdevs_discovered": 2, 00:11:54.897 "num_base_bdevs_operational": 2, 00:11:54.897 "base_bdevs_list": [ 00:11:54.897 { 00:11:54.897 "name": "BaseBdev1", 00:11:54.897 "uuid": "f8d6172e-3323-5168-ba8b-da09cc70cc91", 00:11:54.897 "is_configured": true, 00:11:54.897 "data_offset": 2048, 00:11:54.897 "data_size": 63488 00:11:54.897 }, 00:11:54.897 { 00:11:54.897 "name": "BaseBdev2", 00:11:54.897 "uuid": "96f96c46-9cb4-5ba8-a88c-016785a8f806", 00:11:54.897 "is_configured": true, 00:11:54.897 "data_offset": 2048, 00:11:54.897 "data_size": 63488 00:11:54.897 } 00:11:54.897 ] 00:11:54.897 }' 00:11:54.897 11:53:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.897 11:53:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.464 11:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:55.464 11:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:55.464 [2024-07-25 11:53:41.499067] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa42b50 00:11:56.400 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # 
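With raid_bdev1 online, the test triggers the 60-second randrw workload that bdevperf was started with (-z keeps it idle until an RPC arrives) and then tells the error bdev underneath BaseBdev1 to fail every write. A rough sketch of that step; only RPCs that appear in the trace are used, and the backgrounding plus sleep is an assumption about how the harness sequences them:

  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &   # start the queued workload
  sleep 1
  # from here on, every write that reaches EE_BaseBdev1_malloc completes with an error
  $spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure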
expected_num_base_bdevs=2 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.659 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.918 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.918 "name": "raid_bdev1", 00:11:56.918 "uuid": "4fb91486-7929-4737-a78f-1ecb0feaa649", 00:11:56.918 "strip_size_kb": 64, 00:11:56.918 "state": "online", 00:11:56.918 "raid_level": "concat", 00:11:56.918 "superblock": true, 00:11:56.918 "num_base_bdevs": 2, 00:11:56.918 "num_base_bdevs_discovered": 2, 00:11:56.918 "num_base_bdevs_operational": 2, 00:11:56.918 "base_bdevs_list": [ 00:11:56.918 { 00:11:56.918 "name": "BaseBdev1", 00:11:56.918 "uuid": "f8d6172e-3323-5168-ba8b-da09cc70cc91", 00:11:56.918 "is_configured": true, 00:11:56.918 "data_offset": 2048, 00:11:56.918 "data_size": 63488 00:11:56.918 }, 00:11:56.918 { 00:11:56.918 "name": "BaseBdev2", 00:11:56.918 "uuid": "96f96c46-9cb4-5ba8-a88c-016785a8f806", 00:11:56.918 "is_configured": true, 00:11:56.918 "data_offset": 2048, 00:11:56.918 "data_size": 63488 00:11:56.918 } 00:11:56.918 ] 00:11:56.918 }' 00:11:56.918 11:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.918 11:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.485 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:57.485 [2024-07-25 11:53:43.581223] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.485 [2024-07-25 11:53:43.581264] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.485 [2024-07-25 11:53:43.584178] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.485 [2024-07-25 11:53:43.584208] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.485 [2024-07-25 11:53:43.584234] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.485 [2024-07-25 11:53:43.584244] bdev_raid.c: 
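The verify_raid_bdev_state helper, expanded twice above, reduces to fetching the raid descriptor over RPC and comparing a few of its fields with the expected values; the explicit field checks below are an assumed condensation of what the helper asserts:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<<"$info")" = online ]                    || exit 1
  [ "$(jq -r .raid_level <<<"$info")" = concat ]               || exit 1
  [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 2 ]   || exit 1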
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa47a50 name raid_bdev1, state offline 00:11:57.485 0 00:11:57.485 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4107363 00:11:57.485 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4107363 ']' 00:11:57.485 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4107363 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4107363 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4107363' 00:11:57.743 killing process with pid 4107363 00:11:57.743 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4107363 00:11:57.744 [2024-07-25 11:53:43.650272] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.744 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4107363 00:11:57.744 [2024-07-25 11:53:43.659807] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.744 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TFz9b7sTQp 00:11:57.744 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:57.744 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:11:58.003 00:11:58.003 real 0m6.221s 00:11:58.003 user 0m9.838s 00:11:58.003 sys 0m1.015s 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.003 11:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.003 ************************************ 00:11:58.003 END TEST raid_write_error_test 00:11:58.003 ************************************ 00:11:58.003 11:53:43 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:58.003 11:53:43 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:58.003 11:53:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:58.003 11:53:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.003 11:53:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.003 ************************************ 00:11:58.003 START TEST raid_state_function_test 00:11:58.003 ************************************ 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- 
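Pass or fail is then decided from bdevperf's per-bdev summary: column 6 of the raid_bdev1 row is the failures-per-second figure, and since concat carries no redundancy the injected write errors are expected to surface on the raid bdev, so the value must not be 0.00. A sketch of that extraction (the temporary output file is the one named above; a fresh run would produce a different name):

  out=/raidtest/tmp.TFz9b7sTQp                                  # bdevperf output captured by the harness
  fail_per_s=$(grep -v Job "$out" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]] || { echo "expected write failures on raid_bdev1"; exit 1; }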
common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4108513 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4108513' 00:11:58.003 Process raid pid: 4108513 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4108513 /var/tmp/spdk-raid.sock 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4108513 ']' 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:58.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.003 11:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.003 [2024-07-25 11:53:44.009794] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:58.003 [2024-07-25 11:53:44.009848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.004 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device cannot be used (the same pair of messages repeated for each QAT function 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7) 00:11:58.263
[2024-07-25 11:53:44.142541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.263 [2024-07-25 11:53:44.224286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.263 [2024-07-25 11:53:44.286030] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.263 [2024-07-25 11:53:44.286065] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.829 11:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.829 11:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:58.829 11:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:59.088 [2024-07-25 11:53:45.112358] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.088 [2024-07-25 11:53:45.112399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.088 [2024-07-25 11:53:45.112409] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.088 [2024-07-25 11:53:45.112420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
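raid_state_function_test exercises the raid state machine against a plain bdev_svc app: Existed_Raid is created while neither base bdev exists, so it has to sit in the configuring state with nothing discovered. A short sketch of that first assertion, reusing only the RPCs shown in the trace:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # base bdevs do not exist yet
  state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [ "$state" = configuring ] || exit 1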
00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.088 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.347 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.347 "name": "Existed_Raid", 00:11:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.347 "strip_size_kb": 0, 00:11:59.347 "state": "configuring", 00:11:59.347 "raid_level": "raid1", 00:11:59.347 "superblock": false, 00:11:59.347 "num_base_bdevs": 2, 00:11:59.347 "num_base_bdevs_discovered": 0, 00:11:59.347 "num_base_bdevs_operational": 2, 00:11:59.347 "base_bdevs_list": [ 00:11:59.347 { 00:11:59.347 "name": "BaseBdev1", 00:11:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.347 "is_configured": false, 00:11:59.347 "data_offset": 0, 00:11:59.347 "data_size": 0 00:11:59.347 }, 00:11:59.347 { 00:11:59.347 "name": "BaseBdev2", 00:11:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.347 "is_configured": false, 00:11:59.347 "data_offset": 0, 00:11:59.347 "data_size": 0 00:11:59.347 } 00:11:59.347 ] 00:11:59.347 }' 00:11:59.347 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.347 11:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.915 11:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:00.174 [2024-07-25 11:53:46.102847] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.174 [2024-07-25 11:53:46.102874] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd22f20 name Existed_Raid, state configuring 00:12:00.174 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:00.433 [2024-07-25 11:53:46.331457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.433 [2024-07-25 11:53:46.331483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.433 [2024-07-25 11:53:46.331492] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.433 [2024-07-25 11:53:46.331503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.433 11:53:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.692 [2024-07-25 11:53:46.561446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.692 BaseBdev1 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.692 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.951 [ 00:12:00.951 { 00:12:00.951 "name": "BaseBdev1", 00:12:00.951 "aliases": [ 00:12:00.951 "67d4734d-1126-49dd-86b7-85f16fe5ed24" 00:12:00.951 ], 00:12:00.951 "product_name": "Malloc disk", 00:12:00.951 "block_size": 512, 00:12:00.951 "num_blocks": 65536, 00:12:00.951 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:00.951 "assigned_rate_limits": { 00:12:00.951 "rw_ios_per_sec": 0, 00:12:00.951 "rw_mbytes_per_sec": 0, 00:12:00.951 "r_mbytes_per_sec": 0, 00:12:00.951 "w_mbytes_per_sec": 0 00:12:00.951 }, 00:12:00.951 "claimed": true, 00:12:00.951 "claim_type": "exclusive_write", 00:12:00.951 "zoned": false, 00:12:00.951 "supported_io_types": { 00:12:00.951 "read": true, 00:12:00.951 "write": true, 00:12:00.951 "unmap": true, 00:12:00.951 "flush": true, 00:12:00.951 "reset": true, 00:12:00.951 "nvme_admin": false, 00:12:00.951 "nvme_io": false, 00:12:00.951 "nvme_io_md": false, 00:12:00.951 "write_zeroes": true, 00:12:00.951 "zcopy": true, 00:12:00.951 "get_zone_info": false, 00:12:00.951 "zone_management": false, 00:12:00.951 "zone_append": false, 00:12:00.951 "compare": false, 00:12:00.951 "compare_and_write": false, 00:12:00.951 "abort": true, 00:12:00.951 "seek_hole": false, 00:12:00.951 "seek_data": false, 00:12:00.951 "copy": true, 00:12:00.951 "nvme_iov_md": false 00:12:00.951 }, 00:12:00.951 "memory_domains": [ 00:12:00.951 { 00:12:00.951 "dma_device_id": "system", 00:12:00.951 "dma_device_type": 1 00:12:00.951 }, 00:12:00.951 { 00:12:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.951 "dma_device_type": 2 00:12:00.951 } 00:12:00.951 ], 00:12:00.951 "driver_specific": {} 00:12:00.951 } 00:12:00.951 ] 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
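After bdev_malloc_create returns, the waitforbdev helper makes sure the new device is really visible before the next state check; the trace shows it draining examine callbacks and then querying the bdev with a 2000 ms timeout. Roughly:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_wait_for_examine                              # let pending examine callbacks finish
  $rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null    # blocks up to 2 s for the bdev to appear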
# local expected_state=configuring 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.951 11:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.210 11:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.210 "name": "Existed_Raid", 00:12:01.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.210 "strip_size_kb": 0, 00:12:01.210 "state": "configuring", 00:12:01.210 "raid_level": "raid1", 00:12:01.210 "superblock": false, 00:12:01.210 "num_base_bdevs": 2, 00:12:01.210 "num_base_bdevs_discovered": 1, 00:12:01.210 "num_base_bdevs_operational": 2, 00:12:01.210 "base_bdevs_list": [ 00:12:01.210 { 00:12:01.210 "name": "BaseBdev1", 00:12:01.210 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:01.210 "is_configured": true, 00:12:01.211 "data_offset": 0, 00:12:01.211 "data_size": 65536 00:12:01.211 }, 00:12:01.211 { 00:12:01.211 "name": "BaseBdev2", 00:12:01.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.211 "is_configured": false, 00:12:01.211 "data_offset": 0, 00:12:01.211 "data_size": 0 00:12:01.211 } 00:12:01.211 ] 00:12:01.211 }' 00:12:01.211 11:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.211 11:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 11:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.037 [2024-07-25 11:53:47.977177] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.038 [2024-07-25 11:53:47.977214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd22810 name Existed_Raid, state configuring 00:12:02.038 11:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:02.296 [2024-07-25 11:53:48.201790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.296 [2024-07-25 11:53:48.203192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.297 [2024-07-25 11:53:48.203223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:02.297 11:53:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.297 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.555 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.555 "name": "Existed_Raid", 00:12:02.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.555 "strip_size_kb": 0, 00:12:02.555 "state": "configuring", 00:12:02.555 "raid_level": "raid1", 00:12:02.555 "superblock": false, 00:12:02.555 "num_base_bdevs": 2, 00:12:02.555 "num_base_bdevs_discovered": 1, 00:12:02.555 "num_base_bdevs_operational": 2, 00:12:02.555 "base_bdevs_list": [ 00:12:02.555 { 00:12:02.555 "name": "BaseBdev1", 00:12:02.555 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:02.555 "is_configured": true, 00:12:02.555 "data_offset": 0, 00:12:02.555 "data_size": 65536 00:12:02.555 }, 00:12:02.555 { 00:12:02.555 "name": "BaseBdev2", 00:12:02.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.555 "is_configured": false, 00:12:02.555 "data_offset": 0, 00:12:02.555 "data_size": 0 00:12:02.555 } 00:12:02.555 ] 00:12:02.555 }' 00:12:02.555 11:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.555 11:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.123 [2024-07-25 11:53:49.219663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.123 [2024-07-25 11:53:49.219698] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd23600 00:12:03.123 [2024-07-25 11:53:49.219705] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.123 [2024-07-25 11:53:49.219883] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd19d80 00:12:03.123 [2024-07-25 11:53:49.219996] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0xd23600 00:12:03.123 [2024-07-25 11:53:49.220006] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd23600 00:12:03.123 [2024-07-25 11:53:49.220172] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.123 BaseBdev2 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.123 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:03.382 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.643 [ 00:12:03.643 { 00:12:03.643 "name": "BaseBdev2", 00:12:03.643 "aliases": [ 00:12:03.643 "9ffec3d8-f40e-4921-8e5d-51bb4a566a92" 00:12:03.643 ], 00:12:03.643 "product_name": "Malloc disk", 00:12:03.643 "block_size": 512, 00:12:03.643 "num_blocks": 65536, 00:12:03.643 "uuid": "9ffec3d8-f40e-4921-8e5d-51bb4a566a92", 00:12:03.643 "assigned_rate_limits": { 00:12:03.643 "rw_ios_per_sec": 0, 00:12:03.643 "rw_mbytes_per_sec": 0, 00:12:03.643 "r_mbytes_per_sec": 0, 00:12:03.643 "w_mbytes_per_sec": 0 00:12:03.643 }, 00:12:03.643 "claimed": true, 00:12:03.643 "claim_type": "exclusive_write", 00:12:03.643 "zoned": false, 00:12:03.643 "supported_io_types": { 00:12:03.643 "read": true, 00:12:03.643 "write": true, 00:12:03.643 "unmap": true, 00:12:03.643 "flush": true, 00:12:03.643 "reset": true, 00:12:03.643 "nvme_admin": false, 00:12:03.643 "nvme_io": false, 00:12:03.643 "nvme_io_md": false, 00:12:03.643 "write_zeroes": true, 00:12:03.643 "zcopy": true, 00:12:03.643 "get_zone_info": false, 00:12:03.643 "zone_management": false, 00:12:03.643 "zone_append": false, 00:12:03.643 "compare": false, 00:12:03.643 "compare_and_write": false, 00:12:03.643 "abort": true, 00:12:03.643 "seek_hole": false, 00:12:03.643 "seek_data": false, 00:12:03.643 "copy": true, 00:12:03.643 "nvme_iov_md": false 00:12:03.643 }, 00:12:03.643 "memory_domains": [ 00:12:03.643 { 00:12:03.643 "dma_device_id": "system", 00:12:03.643 "dma_device_type": 1 00:12:03.643 }, 00:12:03.643 { 00:12:03.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.643 "dma_device_type": 2 00:12:03.643 } 00:12:03.643 ], 00:12:03.643 "driver_specific": {} 00:12:03.643 } 00:12:03.643 ] 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:03.643 
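Note that no extra RPC is needed to bring the array up: the moment the second base bdev is created the raid module claims it and Existed_Raid flips from configuring to online, and the verify call that follows merely observes the transition. A sketch of that observation:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b BaseBdev2      # the last missing base bdev
  $rpc bdev_wait_for_examine
  $rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "Existed_Raid")
        | .state == "online" and .num_base_bdevs_discovered == 2' > /dev/null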
11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.643 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.903 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.903 "name": "Existed_Raid", 00:12:03.903 "uuid": "b5f920d7-928f-41c1-8c8e-4c3c86d3fbd5", 00:12:03.903 "strip_size_kb": 0, 00:12:03.903 "state": "online", 00:12:03.903 "raid_level": "raid1", 00:12:03.903 "superblock": false, 00:12:03.903 "num_base_bdevs": 2, 00:12:03.903 "num_base_bdevs_discovered": 2, 00:12:03.903 "num_base_bdevs_operational": 2, 00:12:03.903 "base_bdevs_list": [ 00:12:03.903 { 00:12:03.903 "name": "BaseBdev1", 00:12:03.903 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:03.903 "is_configured": true, 00:12:03.903 "data_offset": 0, 00:12:03.903 "data_size": 65536 00:12:03.903 }, 00:12:03.903 { 00:12:03.903 "name": "BaseBdev2", 00:12:03.903 "uuid": "9ffec3d8-f40e-4921-8e5d-51bb4a566a92", 00:12:03.903 "is_configured": true, 00:12:03.903 "data_offset": 0, 00:12:03.903 "data_size": 65536 00:12:03.903 } 00:12:03.903 ] 00:12:03.903 }' 00:12:03.903 11:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.903 11:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:04.471 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:04.730 [2024-07-25 11:53:50.591531] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:04.730 "name": "Existed_Raid", 00:12:04.730 "aliases": [ 00:12:04.730 "b5f920d7-928f-41c1-8c8e-4c3c86d3fbd5" 00:12:04.730 ], 00:12:04.730 "product_name": "Raid Volume", 00:12:04.730 "block_size": 512, 00:12:04.730 "num_blocks": 65536, 00:12:04.730 "uuid": "b5f920d7-928f-41c1-8c8e-4c3c86d3fbd5", 00:12:04.730 "assigned_rate_limits": { 00:12:04.730 "rw_ios_per_sec": 0, 00:12:04.730 "rw_mbytes_per_sec": 0, 00:12:04.730 "r_mbytes_per_sec": 0, 00:12:04.730 "w_mbytes_per_sec": 0 00:12:04.730 }, 00:12:04.730 "claimed": false, 00:12:04.730 "zoned": false, 00:12:04.730 "supported_io_types": { 00:12:04.730 "read": true, 00:12:04.730 "write": true, 00:12:04.730 "unmap": false, 00:12:04.730 "flush": false, 00:12:04.730 "reset": true, 00:12:04.730 "nvme_admin": false, 00:12:04.730 "nvme_io": false, 00:12:04.730 "nvme_io_md": false, 00:12:04.730 "write_zeroes": true, 00:12:04.730 "zcopy": false, 00:12:04.730 "get_zone_info": false, 00:12:04.730 "zone_management": false, 00:12:04.730 "zone_append": false, 00:12:04.730 "compare": false, 00:12:04.730 "compare_and_write": false, 00:12:04.730 "abort": false, 00:12:04.730 "seek_hole": false, 00:12:04.730 "seek_data": false, 00:12:04.730 "copy": false, 00:12:04.730 "nvme_iov_md": false 00:12:04.730 }, 00:12:04.730 "memory_domains": [ 00:12:04.730 { 00:12:04.730 "dma_device_id": "system", 00:12:04.730 "dma_device_type": 1 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.730 "dma_device_type": 2 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "dma_device_id": "system", 00:12:04.730 "dma_device_type": 1 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.730 "dma_device_type": 2 00:12:04.730 } 00:12:04.730 ], 00:12:04.730 "driver_specific": { 00:12:04.730 "raid": { 00:12:04.730 "uuid": "b5f920d7-928f-41c1-8c8e-4c3c86d3fbd5", 00:12:04.730 "strip_size_kb": 0, 00:12:04.730 "state": "online", 00:12:04.730 "raid_level": "raid1", 00:12:04.730 "superblock": false, 00:12:04.730 "num_base_bdevs": 2, 00:12:04.730 "num_base_bdevs_discovered": 2, 00:12:04.730 "num_base_bdevs_operational": 2, 00:12:04.730 "base_bdevs_list": [ 00:12:04.730 { 00:12:04.730 "name": "BaseBdev1", 00:12:04.730 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:04.730 "is_configured": true, 00:12:04.730 "data_offset": 0, 00:12:04.730 "data_size": 65536 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "name": "BaseBdev2", 00:12:04.730 "uuid": "9ffec3d8-f40e-4921-8e5d-51bb4a566a92", 00:12:04.730 "is_configured": true, 00:12:04.730 "data_offset": 0, 00:12:04.730 "data_size": 65536 00:12:04.730 } 00:12:04.730 ] 00:12:04.730 } 00:12:04.730 } 00:12:04.730 }' 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:04.730 BaseBdev2' 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:04.730 11:53:50 bdev_raid.raid_state_function_test -- 
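verify_raid_bdev_properties then dumps the assembled Raid Volume and each configured base bdev and cross-checks the layout fields, which is what the repeated .block_size / .md_size / .md_interleave / .dif_type probes just below are doing. A compact sketch assumed to mirror that comparison:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_json=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  for name in $(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<<"$raid_json"); do
      base_json=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for field in block_size md_size md_interleave dif_type; do
          [ "$(jq ".$field" <<<"$raid_json")" = "$(jq ".$field" <<<"$base_json")" ] || exit 1
      done
  done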
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:04.990 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:04.990 "name": "BaseBdev1", 00:12:04.990 "aliases": [ 00:12:04.990 "67d4734d-1126-49dd-86b7-85f16fe5ed24" 00:12:04.990 ], 00:12:04.990 "product_name": "Malloc disk", 00:12:04.990 "block_size": 512, 00:12:04.990 "num_blocks": 65536, 00:12:04.990 "uuid": "67d4734d-1126-49dd-86b7-85f16fe5ed24", 00:12:04.990 "assigned_rate_limits": { 00:12:04.990 "rw_ios_per_sec": 0, 00:12:04.990 "rw_mbytes_per_sec": 0, 00:12:04.990 "r_mbytes_per_sec": 0, 00:12:04.990 "w_mbytes_per_sec": 0 00:12:04.990 }, 00:12:04.990 "claimed": true, 00:12:04.990 "claim_type": "exclusive_write", 00:12:04.990 "zoned": false, 00:12:04.990 "supported_io_types": { 00:12:04.990 "read": true, 00:12:04.990 "write": true, 00:12:04.990 "unmap": true, 00:12:04.990 "flush": true, 00:12:04.990 "reset": true, 00:12:04.990 "nvme_admin": false, 00:12:04.990 "nvme_io": false, 00:12:04.990 "nvme_io_md": false, 00:12:04.990 "write_zeroes": true, 00:12:04.990 "zcopy": true, 00:12:04.990 "get_zone_info": false, 00:12:04.990 "zone_management": false, 00:12:04.990 "zone_append": false, 00:12:04.990 "compare": false, 00:12:04.990 "compare_and_write": false, 00:12:04.990 "abort": true, 00:12:04.990 "seek_hole": false, 00:12:04.990 "seek_data": false, 00:12:04.990 "copy": true, 00:12:04.990 "nvme_iov_md": false 00:12:04.990 }, 00:12:04.990 "memory_domains": [ 00:12:04.990 { 00:12:04.990 "dma_device_id": "system", 00:12:04.990 "dma_device_type": 1 00:12:04.990 }, 00:12:04.990 { 00:12:04.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.990 "dma_device_type": 2 00:12:04.990 } 00:12:04.990 ], 00:12:04.990 "driver_specific": {} 00:12:04.990 }' 00:12:04.990 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.990 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.990 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:04.990 11:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.990 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.990 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:04.990 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.990 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:05.249 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:05.508 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:05.508 "name": "BaseBdev2", 
00:12:05.508 "aliases": [ 00:12:05.508 "9ffec3d8-f40e-4921-8e5d-51bb4a566a92" 00:12:05.508 ], 00:12:05.508 "product_name": "Malloc disk", 00:12:05.508 "block_size": 512, 00:12:05.508 "num_blocks": 65536, 00:12:05.508 "uuid": "9ffec3d8-f40e-4921-8e5d-51bb4a566a92", 00:12:05.508 "assigned_rate_limits": { 00:12:05.508 "rw_ios_per_sec": 0, 00:12:05.508 "rw_mbytes_per_sec": 0, 00:12:05.508 "r_mbytes_per_sec": 0, 00:12:05.508 "w_mbytes_per_sec": 0 00:12:05.508 }, 00:12:05.508 "claimed": true, 00:12:05.508 "claim_type": "exclusive_write", 00:12:05.508 "zoned": false, 00:12:05.508 "supported_io_types": { 00:12:05.508 "read": true, 00:12:05.508 "write": true, 00:12:05.508 "unmap": true, 00:12:05.508 "flush": true, 00:12:05.508 "reset": true, 00:12:05.508 "nvme_admin": false, 00:12:05.508 "nvme_io": false, 00:12:05.508 "nvme_io_md": false, 00:12:05.508 "write_zeroes": true, 00:12:05.508 "zcopy": true, 00:12:05.508 "get_zone_info": false, 00:12:05.508 "zone_management": false, 00:12:05.508 "zone_append": false, 00:12:05.509 "compare": false, 00:12:05.509 "compare_and_write": false, 00:12:05.509 "abort": true, 00:12:05.509 "seek_hole": false, 00:12:05.509 "seek_data": false, 00:12:05.509 "copy": true, 00:12:05.509 "nvme_iov_md": false 00:12:05.509 }, 00:12:05.509 "memory_domains": [ 00:12:05.509 { 00:12:05.509 "dma_device_id": "system", 00:12:05.509 "dma_device_type": 1 00:12:05.509 }, 00:12:05.509 { 00:12:05.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.509 "dma_device_type": 2 00:12:05.509 } 00:12:05.509 ], 00:12:05.509 "driver_specific": {} 00:12:05.509 }' 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.509 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.769 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:05.769 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.769 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:05.769 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:05.769 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:06.028 [2024-07-25 11:53:51.918824] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:06.029 
11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.029 11:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.333 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.333 "name": "Existed_Raid", 00:12:06.333 "uuid": "b5f920d7-928f-41c1-8c8e-4c3c86d3fbd5", 00:12:06.333 "strip_size_kb": 0, 00:12:06.333 "state": "online", 00:12:06.333 "raid_level": "raid1", 00:12:06.333 "superblock": false, 00:12:06.333 "num_base_bdevs": 2, 00:12:06.333 "num_base_bdevs_discovered": 1, 00:12:06.333 "num_base_bdevs_operational": 1, 00:12:06.333 "base_bdevs_list": [ 00:12:06.333 { 00:12:06.333 "name": null, 00:12:06.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.333 "is_configured": false, 00:12:06.333 "data_offset": 0, 00:12:06.333 "data_size": 65536 00:12:06.333 }, 00:12:06.333 { 00:12:06.333 "name": "BaseBdev2", 00:12:06.333 "uuid": "9ffec3d8-f40e-4921-8e5d-51bb4a566a92", 00:12:06.333 "is_configured": true, 00:12:06.333 "data_offset": 0, 00:12:06.333 "data_size": 65536 00:12:06.333 } 00:12:06.333 ] 00:12:06.333 }' 00:12:06.333 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.333 11:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:06.910 11:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.910 11:53:52 
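In the degraded descriptor just dumped, the removed leg stays in base_bdevs_list as a placeholder entry ("name": null, all-zero uuid, is_configured false) while num_base_bdevs_discovered drops to 1; the step that follows removes BaseBdev2 as well, after which the array can only go offline. A one-liner sketch for inspecting those placeholders:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")
        | .base_bdevs_list[] | "\(.name // "missing") configured=\(.is_configured)"'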
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:07.169 [2024-07-25 11:53:53.118967] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.169 [2024-07-25 11:53:53.119041] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.169 [2024-07-25 11:53:53.129272] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.169 [2024-07-25 11:53:53.129301] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.169 [2024-07-25 11:53:53.129312] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd23600 name Existed_Raid, state offline 00:12:07.169 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:07.169 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:07.169 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.169 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4108513 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4108513 ']' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4108513 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4108513 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4108513' 00:12:07.429 killing process with pid 4108513 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4108513 00:12:07.429 [2024-07-25 11:53:53.445580] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.429 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4108513 00:12:07.429 [2024-07-25 11:53:53.446437] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:07.689 00:12:07.689 real 0m9.692s 00:12:07.689 user 0m17.247s 00:12:07.689 sys 0m1.788s 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.689 11:53:53 
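Each subtest ends by tearing down the bdev_svc process it started; the xtrace above shows the kill -0 / ps / kill / wait sequence for pid 4108513. Reduced to its core as a sketch (the real killprocess in autotest_common.sh also handles FreeBSD and sudo-wrapped processes):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid"                                    # still alive?
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]]  # here the comm is reactor_0, not sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it so the next subtest starts clean
    }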
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.689 ************************************ 00:12:07.689 END TEST raid_state_function_test 00:12:07.689 ************************************ 00:12:07.689 11:53:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:07.689 11:53:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:07.689 11:53:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.689 11:53:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.689 ************************************ 00:12:07.689 START TEST raid_state_function_test_sb 00:12:07.689 ************************************ 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4110404 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 
'Process raid pid: 4110404' 00:12:07.689 Process raid pid: 4110404 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4110404 /var/tmp/spdk-raid.sock 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4110404 ']' 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:07.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.689 11:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.689 [2024-07-25 11:53:53.790657] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:12:07.689 [2024-07-25 11:53:53.790716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:07.948 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:07.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.948 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:07.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.949 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:07.949 [2024-07-25 11:53:53.922703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.949 [2024-07-25 11:53:54.008958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.208 [2024-07-25 11:53:54.071917] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.208 [2024-07-25 11:53:54.071946] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.775 11:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.775 11:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:08.775 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 
BaseBdev2' -n Existed_Raid 00:12:09.034 [2024-07-25 11:53:54.899431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.034 [2024-07-25 11:53:54.899467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.034 [2024-07-25 11:53:54.899477] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.034 [2024-07-25 11:53:54.899488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.034 11:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.034 11:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.034 "name": "Existed_Raid", 00:12:09.034 "uuid": "ab6bf59e-4362-4ea3-ae30-d55748380dcc", 00:12:09.035 "strip_size_kb": 0, 00:12:09.035 "state": "configuring", 00:12:09.035 "raid_level": "raid1", 00:12:09.035 "superblock": true, 00:12:09.035 "num_base_bdevs": 2, 00:12:09.035 "num_base_bdevs_discovered": 0, 00:12:09.035 "num_base_bdevs_operational": 2, 00:12:09.035 "base_bdevs_list": [ 00:12:09.035 { 00:12:09.035 "name": "BaseBdev1", 00:12:09.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.035 "is_configured": false, 00:12:09.035 "data_offset": 0, 00:12:09.035 "data_size": 0 00:12:09.035 }, 00:12:09.035 { 00:12:09.035 "name": "BaseBdev2", 00:12:09.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.035 "is_configured": false, 00:12:09.035 "data_offset": 0, 00:12:09.035 "data_size": 0 00:12:09.035 } 00:12:09.035 ] 00:12:09.035 }' 00:12:09.035 11:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.035 11:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.604 11:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:09.863 [2024-07-25 11:53:55.905943] bdev_raid.c:2382:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:12:09.863 [2024-07-25 11:53:55.905970] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4bf20 name Existed_Raid, state configuring 00:12:09.863 11:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:10.122 [2024-07-25 11:53:56.134554] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.122 [2024-07-25 11:53:56.134580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.122 [2024-07-25 11:53:56.134589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.122 [2024-07-25 11:53:56.134600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.122 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.425 [2024-07-25 11:53:56.356496] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.425 BaseBdev1 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:10.425 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.684 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.944 [ 00:12:10.944 { 00:12:10.944 "name": "BaseBdev1", 00:12:10.944 "aliases": [ 00:12:10.944 "3197e62b-5000-4d23-a0d3-82d22fb9f80b" 00:12:10.944 ], 00:12:10.944 "product_name": "Malloc disk", 00:12:10.944 "block_size": 512, 00:12:10.944 "num_blocks": 65536, 00:12:10.944 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:10.944 "assigned_rate_limits": { 00:12:10.944 "rw_ios_per_sec": 0, 00:12:10.944 "rw_mbytes_per_sec": 0, 00:12:10.944 "r_mbytes_per_sec": 0, 00:12:10.944 "w_mbytes_per_sec": 0 00:12:10.944 }, 00:12:10.944 "claimed": true, 00:12:10.944 "claim_type": "exclusive_write", 00:12:10.944 "zoned": false, 00:12:10.944 "supported_io_types": { 00:12:10.944 "read": true, 00:12:10.944 "write": true, 00:12:10.944 "unmap": true, 00:12:10.944 "flush": true, 00:12:10.944 "reset": true, 00:12:10.944 "nvme_admin": false, 00:12:10.944 "nvme_io": false, 00:12:10.944 "nvme_io_md": false, 00:12:10.944 "write_zeroes": true, 00:12:10.944 "zcopy": true, 00:12:10.944 "get_zone_info": false, 00:12:10.944 "zone_management": false, 00:12:10.944 "zone_append": false, 00:12:10.944 
"compare": false, 00:12:10.944 "compare_and_write": false, 00:12:10.944 "abort": true, 00:12:10.944 "seek_hole": false, 00:12:10.944 "seek_data": false, 00:12:10.944 "copy": true, 00:12:10.944 "nvme_iov_md": false 00:12:10.944 }, 00:12:10.944 "memory_domains": [ 00:12:10.944 { 00:12:10.944 "dma_device_id": "system", 00:12:10.944 "dma_device_type": 1 00:12:10.944 }, 00:12:10.944 { 00:12:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.944 "dma_device_type": 2 00:12:10.944 } 00:12:10.944 ], 00:12:10.944 "driver_specific": {} 00:12:10.944 } 00:12:10.944 ] 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.944 11:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.944 11:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.944 "name": "Existed_Raid", 00:12:10.944 "uuid": "19cc0f3e-4c11-4caf-a06a-cb61de592115", 00:12:10.944 "strip_size_kb": 0, 00:12:10.944 "state": "configuring", 00:12:10.944 "raid_level": "raid1", 00:12:10.944 "superblock": true, 00:12:10.944 "num_base_bdevs": 2, 00:12:10.944 "num_base_bdevs_discovered": 1, 00:12:10.944 "num_base_bdevs_operational": 2, 00:12:10.944 "base_bdevs_list": [ 00:12:10.944 { 00:12:10.944 "name": "BaseBdev1", 00:12:10.944 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:10.944 "is_configured": true, 00:12:10.944 "data_offset": 2048, 00:12:10.944 "data_size": 63488 00:12:10.944 }, 00:12:10.944 { 00:12:10.944 "name": "BaseBdev2", 00:12:10.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.945 "is_configured": false, 00:12:10.945 "data_offset": 0, 00:12:10.945 "data_size": 0 00:12:10.945 } 00:12:10.945 ] 00:12:10.945 }' 00:12:10.945 11:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.945 11:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.512 11:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:11.771 [2024-07-25 11:53:57.824550] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.771 [2024-07-25 11:53:57.824584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4b810 name Existed_Raid, state configuring 00:12:11.772 11:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:12.031 [2024-07-25 11:53:58.053187] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.031 [2024-07-25 11:53:58.054603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.031 [2024-07-25 11:53:58.054634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.031 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.291 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.291 "name": "Existed_Raid", 00:12:12.291 "uuid": "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59", 00:12:12.291 "strip_size_kb": 0, 00:12:12.291 "state": "configuring", 00:12:12.291 "raid_level": "raid1", 00:12:12.291 "superblock": true, 00:12:12.291 "num_base_bdevs": 2, 00:12:12.291 "num_base_bdevs_discovered": 1, 00:12:12.291 "num_base_bdevs_operational": 2, 00:12:12.291 "base_bdevs_list": [ 00:12:12.291 { 00:12:12.291 "name": "BaseBdev1", 00:12:12.291 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:12.291 "is_configured": true, 00:12:12.291 "data_offset": 2048, 00:12:12.291 "data_size": 63488 00:12:12.291 }, 00:12:12.291 { 00:12:12.291 "name": "BaseBdev2", 00:12:12.291 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:12.291 "is_configured": false, 00:12:12.291 "data_offset": 0, 00:12:12.291 "data_size": 0 00:12:12.291 } 00:12:12.291 ] 00:12:12.291 }' 00:12:12.291 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.291 11:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.859 11:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.118 [2024-07-25 11:53:59.082989] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.118 [2024-07-25 11:53:59.083121] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a4c600 00:12:13.118 [2024-07-25 11:53:59.083133] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.118 [2024-07-25 11:53:59.083305] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a4d9c0 00:12:13.118 [2024-07-25 11:53:59.083418] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a4c600 00:12:13.118 [2024-07-25 11:53:59.083427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a4c600 00:12:13.118 [2024-07-25 11:53:59.083515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.118 BaseBdev2 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:13.118 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:13.376 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.636 [ 00:12:13.636 { 00:12:13.636 "name": "BaseBdev2", 00:12:13.636 "aliases": [ 00:12:13.636 "af9ac81e-39af-4f0e-8f2c-ddf543eac64f" 00:12:13.636 ], 00:12:13.636 "product_name": "Malloc disk", 00:12:13.636 "block_size": 512, 00:12:13.636 "num_blocks": 65536, 00:12:13.636 "uuid": "af9ac81e-39af-4f0e-8f2c-ddf543eac64f", 00:12:13.636 "assigned_rate_limits": { 00:12:13.636 "rw_ios_per_sec": 0, 00:12:13.636 "rw_mbytes_per_sec": 0, 00:12:13.636 "r_mbytes_per_sec": 0, 00:12:13.636 "w_mbytes_per_sec": 0 00:12:13.636 }, 00:12:13.636 "claimed": true, 00:12:13.636 "claim_type": "exclusive_write", 00:12:13.636 "zoned": false, 00:12:13.636 "supported_io_types": { 00:12:13.636 "read": true, 00:12:13.636 "write": true, 00:12:13.636 "unmap": true, 00:12:13.636 "flush": true, 00:12:13.636 "reset": true, 00:12:13.636 "nvme_admin": false, 00:12:13.636 "nvme_io": false, 00:12:13.636 "nvme_io_md": false, 00:12:13.636 "write_zeroes": true, 00:12:13.636 
"zcopy": true, 00:12:13.636 "get_zone_info": false, 00:12:13.636 "zone_management": false, 00:12:13.636 "zone_append": false, 00:12:13.636 "compare": false, 00:12:13.636 "compare_and_write": false, 00:12:13.636 "abort": true, 00:12:13.636 "seek_hole": false, 00:12:13.636 "seek_data": false, 00:12:13.636 "copy": true, 00:12:13.636 "nvme_iov_md": false 00:12:13.636 }, 00:12:13.636 "memory_domains": [ 00:12:13.636 { 00:12:13.636 "dma_device_id": "system", 00:12:13.636 "dma_device_type": 1 00:12:13.636 }, 00:12:13.636 { 00:12:13.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.636 "dma_device_type": 2 00:12:13.636 } 00:12:13.636 ], 00:12:13.636 "driver_specific": {} 00:12:13.636 } 00:12:13.636 ] 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.636 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.896 11:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.896 "name": "Existed_Raid", 00:12:13.896 "uuid": "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59", 00:12:13.896 "strip_size_kb": 0, 00:12:13.896 "state": "online", 00:12:13.896 "raid_level": "raid1", 00:12:13.896 "superblock": true, 00:12:13.896 "num_base_bdevs": 2, 00:12:13.896 "num_base_bdevs_discovered": 2, 00:12:13.896 "num_base_bdevs_operational": 2, 00:12:13.896 "base_bdevs_list": [ 00:12:13.896 { 00:12:13.896 "name": "BaseBdev1", 00:12:13.896 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:13.896 "is_configured": true, 00:12:13.896 "data_offset": 2048, 00:12:13.896 "data_size": 63488 00:12:13.896 }, 00:12:13.896 { 00:12:13.896 "name": "BaseBdev2", 00:12:13.896 "uuid": "af9ac81e-39af-4f0e-8f2c-ddf543eac64f", 00:12:13.896 "is_configured": true, 00:12:13.896 "data_offset": 2048, 00:12:13.896 "data_size": 63488 00:12:13.896 } 00:12:13.896 ] 00:12:13.896 }' 00:12:13.896 11:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.896 11:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:14.463 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:14.463 [2024-07-25 11:54:00.571157] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.722 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:14.722 "name": "Existed_Raid", 00:12:14.722 "aliases": [ 00:12:14.722 "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59" 00:12:14.722 ], 00:12:14.722 "product_name": "Raid Volume", 00:12:14.722 "block_size": 512, 00:12:14.722 "num_blocks": 63488, 00:12:14.722 "uuid": "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59", 00:12:14.722 "assigned_rate_limits": { 00:12:14.722 "rw_ios_per_sec": 0, 00:12:14.722 "rw_mbytes_per_sec": 0, 00:12:14.722 "r_mbytes_per_sec": 0, 00:12:14.722 "w_mbytes_per_sec": 0 00:12:14.722 }, 00:12:14.722 "claimed": false, 00:12:14.722 "zoned": false, 00:12:14.722 "supported_io_types": { 00:12:14.722 "read": true, 00:12:14.722 "write": true, 00:12:14.722 "unmap": false, 00:12:14.722 "flush": false, 00:12:14.722 "reset": true, 00:12:14.722 "nvme_admin": false, 00:12:14.722 "nvme_io": false, 00:12:14.722 "nvme_io_md": false, 00:12:14.722 "write_zeroes": true, 00:12:14.722 "zcopy": false, 00:12:14.722 "get_zone_info": false, 00:12:14.722 "zone_management": false, 00:12:14.722 "zone_append": false, 00:12:14.722 "compare": false, 00:12:14.722 "compare_and_write": false, 00:12:14.722 "abort": false, 00:12:14.722 "seek_hole": false, 00:12:14.722 "seek_data": false, 00:12:14.722 "copy": false, 00:12:14.722 "nvme_iov_md": false 00:12:14.722 }, 00:12:14.722 "memory_domains": [ 00:12:14.722 { 00:12:14.722 "dma_device_id": "system", 00:12:14.722 "dma_device_type": 1 00:12:14.722 }, 00:12:14.722 { 00:12:14.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.722 "dma_device_type": 2 00:12:14.722 }, 00:12:14.722 { 00:12:14.722 "dma_device_id": "system", 00:12:14.722 "dma_device_type": 1 00:12:14.722 }, 00:12:14.722 { 00:12:14.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.722 "dma_device_type": 2 00:12:14.722 } 00:12:14.722 ], 00:12:14.722 "driver_specific": { 00:12:14.722 "raid": { 00:12:14.722 "uuid": "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59", 00:12:14.722 "strip_size_kb": 0, 00:12:14.722 "state": "online", 00:12:14.722 "raid_level": "raid1", 00:12:14.722 "superblock": true, 00:12:14.722 "num_base_bdevs": 2, 00:12:14.722 "num_base_bdevs_discovered": 2, 00:12:14.722 "num_base_bdevs_operational": 2, 
00:12:14.722 "base_bdevs_list": [ 00:12:14.722 { 00:12:14.722 "name": "BaseBdev1", 00:12:14.722 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:14.722 "is_configured": true, 00:12:14.722 "data_offset": 2048, 00:12:14.722 "data_size": 63488 00:12:14.722 }, 00:12:14.722 { 00:12:14.722 "name": "BaseBdev2", 00:12:14.722 "uuid": "af9ac81e-39af-4f0e-8f2c-ddf543eac64f", 00:12:14.722 "is_configured": true, 00:12:14.722 "data_offset": 2048, 00:12:14.722 "data_size": 63488 00:12:14.722 } 00:12:14.722 ] 00:12:14.722 } 00:12:14.722 } 00:12:14.722 }' 00:12:14.722 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.723 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:14.723 BaseBdev2' 00:12:14.723 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.723 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:14.723 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.981 "name": "BaseBdev1", 00:12:14.981 "aliases": [ 00:12:14.981 "3197e62b-5000-4d23-a0d3-82d22fb9f80b" 00:12:14.981 ], 00:12:14.981 "product_name": "Malloc disk", 00:12:14.981 "block_size": 512, 00:12:14.981 "num_blocks": 65536, 00:12:14.981 "uuid": "3197e62b-5000-4d23-a0d3-82d22fb9f80b", 00:12:14.981 "assigned_rate_limits": { 00:12:14.981 "rw_ios_per_sec": 0, 00:12:14.981 "rw_mbytes_per_sec": 0, 00:12:14.981 "r_mbytes_per_sec": 0, 00:12:14.981 "w_mbytes_per_sec": 0 00:12:14.981 }, 00:12:14.981 "claimed": true, 00:12:14.981 "claim_type": "exclusive_write", 00:12:14.981 "zoned": false, 00:12:14.981 "supported_io_types": { 00:12:14.981 "read": true, 00:12:14.981 "write": true, 00:12:14.981 "unmap": true, 00:12:14.981 "flush": true, 00:12:14.981 "reset": true, 00:12:14.981 "nvme_admin": false, 00:12:14.981 "nvme_io": false, 00:12:14.981 "nvme_io_md": false, 00:12:14.981 "write_zeroes": true, 00:12:14.981 "zcopy": true, 00:12:14.981 "get_zone_info": false, 00:12:14.981 "zone_management": false, 00:12:14.981 "zone_append": false, 00:12:14.981 "compare": false, 00:12:14.981 "compare_and_write": false, 00:12:14.981 "abort": true, 00:12:14.981 "seek_hole": false, 00:12:14.981 "seek_data": false, 00:12:14.981 "copy": true, 00:12:14.981 "nvme_iov_md": false 00:12:14.981 }, 00:12:14.981 "memory_domains": [ 00:12:14.981 { 00:12:14.981 "dma_device_id": "system", 00:12:14.981 "dma_device_type": 1 00:12:14.981 }, 00:12:14.981 { 00:12:14.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.981 "dma_device_type": 2 00:12:14.981 } 00:12:14.981 ], 00:12:14.981 "driver_specific": {} 00:12:14.981 }' 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.981 11:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.981 
11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.981 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.981 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.240 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.498 "name": "BaseBdev2", 00:12:15.498 "aliases": [ 00:12:15.498 "af9ac81e-39af-4f0e-8f2c-ddf543eac64f" 00:12:15.498 ], 00:12:15.498 "product_name": "Malloc disk", 00:12:15.498 "block_size": 512, 00:12:15.498 "num_blocks": 65536, 00:12:15.498 "uuid": "af9ac81e-39af-4f0e-8f2c-ddf543eac64f", 00:12:15.498 "assigned_rate_limits": { 00:12:15.498 "rw_ios_per_sec": 0, 00:12:15.498 "rw_mbytes_per_sec": 0, 00:12:15.498 "r_mbytes_per_sec": 0, 00:12:15.498 "w_mbytes_per_sec": 0 00:12:15.498 }, 00:12:15.498 "claimed": true, 00:12:15.498 "claim_type": "exclusive_write", 00:12:15.498 "zoned": false, 00:12:15.498 "supported_io_types": { 00:12:15.498 "read": true, 00:12:15.498 "write": true, 00:12:15.498 "unmap": true, 00:12:15.498 "flush": true, 00:12:15.498 "reset": true, 00:12:15.498 "nvme_admin": false, 00:12:15.498 "nvme_io": false, 00:12:15.498 "nvme_io_md": false, 00:12:15.498 "write_zeroes": true, 00:12:15.498 "zcopy": true, 00:12:15.498 "get_zone_info": false, 00:12:15.498 "zone_management": false, 00:12:15.498 "zone_append": false, 00:12:15.498 "compare": false, 00:12:15.498 "compare_and_write": false, 00:12:15.498 "abort": true, 00:12:15.498 "seek_hole": false, 00:12:15.498 "seek_data": false, 00:12:15.498 "copy": true, 00:12:15.498 "nvme_iov_md": false 00:12:15.498 }, 00:12:15.498 "memory_domains": [ 00:12:15.498 { 00:12:15.498 "dma_device_id": "system", 00:12:15.498 "dma_device_type": 1 00:12:15.498 }, 00:12:15.498 { 00:12:15.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.498 "dma_device_type": 2 00:12:15.498 } 00:12:15.498 ], 00:12:15.498 "driver_specific": {} 00:12:15.498 }' 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.498 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.498 11:54:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.757 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:16.016 [2024-07-25 11:54:01.970620] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.016 11:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.275 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.275 "name": "Existed_Raid", 00:12:16.275 "uuid": "c68b7533-56b5-4d5d-bfeb-f3b1d11c0d59", 00:12:16.275 "strip_size_kb": 0, 00:12:16.275 "state": "online", 00:12:16.275 "raid_level": "raid1", 00:12:16.275 "superblock": true, 00:12:16.275 "num_base_bdevs": 2, 00:12:16.275 "num_base_bdevs_discovered": 1, 00:12:16.275 "num_base_bdevs_operational": 1, 00:12:16.275 "base_bdevs_list": [ 00:12:16.275 { 00:12:16.275 "name": null, 00:12:16.275 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:16.275 "is_configured": false, 00:12:16.275 "data_offset": 2048, 00:12:16.275 "data_size": 63488 00:12:16.275 }, 00:12:16.275 { 00:12:16.275 "name": "BaseBdev2", 00:12:16.275 "uuid": "af9ac81e-39af-4f0e-8f2c-ddf543eac64f", 00:12:16.275 "is_configured": true, 00:12:16.275 "data_offset": 2048, 00:12:16.275 "data_size": 63488 00:12:16.275 } 00:12:16.275 ] 00:12:16.275 }' 00:12:16.275 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.275 11:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.843 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:16.843 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:16.843 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.843 11:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:17.103 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:17.103 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.103 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:17.363 [2024-07-25 11:54:03.222952] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.363 [2024-07-25 11:54:03.223025] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.363 [2024-07-25 11:54:03.233192] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.363 [2024-07-25 11:54:03.233224] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.363 [2024-07-25 11:54:03.233234] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4c600 name Existed_Raid, state offline 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4110404 00:12:17.363 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4110404 ']' 00:12:17.364 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4110404 00:12:17.364 11:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # uname 00:12:17.364 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.364 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4110404 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4110404' 00:12:17.625 killing process with pid 4110404 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4110404 00:12:17.625 [2024-07-25 11:54:03.525700] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4110404 00:12:17.625 [2024-07-25 11:54:03.526550] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:17.625 00:12:17.625 real 0m9.990s 00:12:17.625 user 0m17.734s 00:12:17.625 sys 0m1.908s 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.625 11:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.625 ************************************ 00:12:17.625 END TEST raid_state_function_test_sb 00:12:17.625 ************************************ 00:12:17.885 11:54:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:17.885 11:54:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.885 11:54:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.885 11:54:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.885 ************************************ 00:12:17.885 START TEST raid_superblock_test 00:12:17.885 ************************************ 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:17.885 11:54:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4112526 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4112526 /var/tmp/spdk-raid.sock 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4112526 ']' 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:17.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.885 11:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:17.885 [2024-07-25 11:54:03.857207] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
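raid_superblock_test begins the same way as the two state-function tests: start a bare bdev_svc application that owns the RPC socket, then block in waitforlisten until that socket answers. The pattern, as a sketch (binary path, socket and -L flag are the ones shown above; the polling loop is only the idea behind waitforlisten, not its exact implementation):

    APP=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
    SOCK=/var/tmp/spdk-raid.sock
    $APP -r "$SOCK" -L bdev_raid &        # -L bdev_raid enables the *DEBUG* lines seen in this log
    raid_pid=$!
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                         # poll until the UNIX-domain RPC socket is listening
    done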
00:12:17.885 [2024-07-25 11:54:03.857267] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112526 ] 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:17.885 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:17.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:17.885 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:17.885 [2024-07-25 11:54:03.990527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.146 [2024-07-25 11:54:04.079260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.146 [2024-07-25 11:54:04.136672] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.146 [2024-07-25 11:54:04.136709] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.752 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:19.011 malloc1 00:12:19.011 11:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:19.269 [2024-07-25 11:54:05.202030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.269 [2024-07-25 11:54:05.202073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.269 [2024-07-25 11:54:05.202093] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x231a2f0 00:12:19.269 [2024-07-25 11:54:05.202105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.269 [2024-07-25 11:54:05.203691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.269 [2024-07-25 11:54:05.203720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.269 pt1 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.269 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:19.527 malloc2 00:12:19.527 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.785 [2024-07-25 11:54:05.659654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.785 [2024-07-25 11:54:05.659693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.785 [2024-07-25 11:54:05.659707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x231b6d0 00:12:19.785 [2024-07-25 11:54:05.659718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.785 [2024-07-25 11:54:05.661152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.785 [2024-07-25 11:54:05.661179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.785 pt2 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:12:19.785 [2024-07-25 11:54:05.872236] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.785 [2024-07-25 11:54:05.873337] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.785 [2024-07-25 11:54:05.873463] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24b4310 00:12:19.785 [2024-07-25 11:54:05.873476] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.785 [2024-07-25 11:54:05.873655] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b3ce0 00:12:19.785 [2024-07-25 11:54:05.873784] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24b4310 00:12:19.785 [2024-07-25 11:54:05.873794] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24b4310 00:12:19.785 [2024-07-25 11:54:05.873881] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.785 11:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.043 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.043 "name": "raid_bdev1", 00:12:20.043 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:20.043 "strip_size_kb": 0, 00:12:20.043 "state": "online", 00:12:20.043 "raid_level": "raid1", 00:12:20.043 "superblock": true, 00:12:20.043 "num_base_bdevs": 2, 00:12:20.043 "num_base_bdevs_discovered": 2, 00:12:20.043 "num_base_bdevs_operational": 2, 00:12:20.043 "base_bdevs_list": [ 00:12:20.043 { 00:12:20.043 "name": "pt1", 00:12:20.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.043 "is_configured": true, 00:12:20.043 "data_offset": 2048, 00:12:20.043 "data_size": 63488 00:12:20.043 }, 00:12:20.043 { 00:12:20.043 "name": "pt2", 00:12:20.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.043 "is_configured": true, 00:12:20.043 "data_offset": 2048, 00:12:20.043 "data_size": 63488 00:12:20.043 } 00:12:20.043 ] 00:12:20.043 }' 00:12:20.043 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.043 11:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:20.609 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:20.610 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:20.610 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:20.868 [2024-07-25 11:54:06.871057] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:20.868 "name": "raid_bdev1", 00:12:20.868 "aliases": [ 00:12:20.868 "418cb78f-79d1-4bde-a3b3-73ce3b6dd418" 00:12:20.868 ], 00:12:20.868 "product_name": "Raid Volume", 00:12:20.868 "block_size": 512, 00:12:20.868 "num_blocks": 63488, 00:12:20.868 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:20.868 "assigned_rate_limits": { 00:12:20.868 "rw_ios_per_sec": 0, 00:12:20.868 "rw_mbytes_per_sec": 0, 00:12:20.868 "r_mbytes_per_sec": 0, 00:12:20.868 "w_mbytes_per_sec": 0 00:12:20.868 }, 00:12:20.868 "claimed": false, 00:12:20.868 "zoned": false, 00:12:20.868 "supported_io_types": { 00:12:20.868 "read": true, 00:12:20.868 "write": true, 00:12:20.868 "unmap": false, 00:12:20.868 "flush": false, 00:12:20.868 "reset": true, 00:12:20.868 "nvme_admin": false, 00:12:20.868 "nvme_io": false, 00:12:20.868 "nvme_io_md": false, 00:12:20.868 "write_zeroes": true, 00:12:20.868 "zcopy": false, 00:12:20.868 "get_zone_info": false, 00:12:20.868 "zone_management": false, 00:12:20.868 "zone_append": false, 00:12:20.868 "compare": false, 00:12:20.868 "compare_and_write": false, 00:12:20.868 "abort": false, 00:12:20.868 "seek_hole": false, 00:12:20.868 "seek_data": false, 00:12:20.868 "copy": false, 00:12:20.868 "nvme_iov_md": false 00:12:20.868 }, 00:12:20.868 "memory_domains": [ 00:12:20.868 { 00:12:20.868 "dma_device_id": "system", 00:12:20.868 "dma_device_type": 1 00:12:20.868 }, 00:12:20.868 { 00:12:20.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.868 "dma_device_type": 2 00:12:20.868 }, 00:12:20.868 { 00:12:20.868 "dma_device_id": "system", 00:12:20.868 "dma_device_type": 1 00:12:20.868 }, 00:12:20.868 { 00:12:20.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.868 "dma_device_type": 2 00:12:20.868 } 00:12:20.868 ], 00:12:20.868 "driver_specific": { 00:12:20.868 "raid": { 00:12:20.868 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:20.868 "strip_size_kb": 0, 00:12:20.868 "state": "online", 00:12:20.868 "raid_level": "raid1", 00:12:20.868 "superblock": true, 00:12:20.868 "num_base_bdevs": 2, 00:12:20.868 "num_base_bdevs_discovered": 2, 00:12:20.868 "num_base_bdevs_operational": 2, 00:12:20.868 "base_bdevs_list": [ 00:12:20.868 { 00:12:20.868 "name": "pt1", 00:12:20.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.868 "is_configured": true, 00:12:20.868 "data_offset": 2048, 00:12:20.868 "data_size": 63488 00:12:20.868 }, 00:12:20.868 { 00:12:20.868 "name": "pt2", 00:12:20.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.868 "is_configured": true, 00:12:20.868 "data_offset": 2048, 00:12:20.868 "data_size": 63488 00:12:20.868 } 00:12:20.868 ] 00:12:20.868 } 00:12:20.868 } 00:12:20.868 }' 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:20.868 pt2' 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:20.868 11:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:21.127 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:21.127 "name": "pt1", 00:12:21.127 "aliases": [ 00:12:21.127 "00000000-0000-0000-0000-000000000001" 00:12:21.127 ], 00:12:21.127 "product_name": "passthru", 00:12:21.127 "block_size": 512, 00:12:21.127 "num_blocks": 65536, 00:12:21.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.127 "assigned_rate_limits": { 00:12:21.127 "rw_ios_per_sec": 0, 00:12:21.127 "rw_mbytes_per_sec": 0, 00:12:21.127 "r_mbytes_per_sec": 0, 00:12:21.127 "w_mbytes_per_sec": 0 00:12:21.127 }, 00:12:21.127 "claimed": true, 00:12:21.127 "claim_type": "exclusive_write", 00:12:21.127 "zoned": false, 00:12:21.127 "supported_io_types": { 00:12:21.127 "read": true, 00:12:21.127 "write": true, 00:12:21.127 "unmap": true, 00:12:21.127 "flush": true, 00:12:21.127 "reset": true, 00:12:21.127 "nvme_admin": false, 00:12:21.127 "nvme_io": false, 00:12:21.127 "nvme_io_md": false, 00:12:21.127 "write_zeroes": true, 00:12:21.127 "zcopy": true, 00:12:21.127 "get_zone_info": false, 00:12:21.127 "zone_management": false, 00:12:21.127 "zone_append": false, 00:12:21.127 "compare": false, 00:12:21.127 "compare_and_write": false, 00:12:21.127 "abort": true, 00:12:21.127 "seek_hole": false, 00:12:21.127 "seek_data": false, 00:12:21.127 "copy": true, 00:12:21.127 "nvme_iov_md": false 00:12:21.127 }, 00:12:21.127 "memory_domains": [ 00:12:21.127 { 00:12:21.127 "dma_device_id": "system", 00:12:21.127 "dma_device_type": 1 00:12:21.127 }, 00:12:21.127 { 00:12:21.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.127 "dma_device_type": 2 00:12:21.127 } 00:12:21.127 ], 00:12:21.127 "driver_specific": { 00:12:21.127 "passthru": { 00:12:21.127 "name": "pt1", 00:12:21.127 "base_bdev_name": "malloc1" 00:12:21.127 } 00:12:21.127 } 00:12:21.127 }' 00:12:21.127 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.127 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.386 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:21.386 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.386 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.386 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:21.386 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.387 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.387 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:21.387 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.387 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:21.675 "name": "pt2", 00:12:21.675 "aliases": [ 00:12:21.675 "00000000-0000-0000-0000-000000000002" 00:12:21.675 ], 00:12:21.675 "product_name": "passthru", 00:12:21.675 "block_size": 512, 00:12:21.675 "num_blocks": 65536, 00:12:21.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.675 "assigned_rate_limits": { 00:12:21.675 "rw_ios_per_sec": 0, 00:12:21.675 "rw_mbytes_per_sec": 0, 00:12:21.675 "r_mbytes_per_sec": 0, 00:12:21.675 "w_mbytes_per_sec": 0 00:12:21.675 }, 00:12:21.675 "claimed": true, 00:12:21.675 "claim_type": "exclusive_write", 00:12:21.675 "zoned": false, 00:12:21.675 "supported_io_types": { 00:12:21.675 "read": true, 00:12:21.675 "write": true, 00:12:21.675 "unmap": true, 00:12:21.675 "flush": true, 00:12:21.675 "reset": true, 00:12:21.675 "nvme_admin": false, 00:12:21.675 "nvme_io": false, 00:12:21.675 "nvme_io_md": false, 00:12:21.675 "write_zeroes": true, 00:12:21.675 "zcopy": true, 00:12:21.675 "get_zone_info": false, 00:12:21.675 "zone_management": false, 00:12:21.675 "zone_append": false, 00:12:21.675 "compare": false, 00:12:21.675 "compare_and_write": false, 00:12:21.675 "abort": true, 00:12:21.675 "seek_hole": false, 00:12:21.675 "seek_data": false, 00:12:21.675 "copy": true, 00:12:21.675 "nvme_iov_md": false 00:12:21.675 }, 00:12:21.675 "memory_domains": [ 00:12:21.675 { 00:12:21.675 "dma_device_id": "system", 00:12:21.675 "dma_device_type": 1 00:12:21.675 }, 00:12:21.675 { 00:12:21.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.675 "dma_device_type": 2 00:12:21.675 } 00:12:21.675 ], 00:12:21.675 "driver_specific": { 00:12:21.675 "passthru": { 00:12:21.675 "name": "pt2", 00:12:21.675 "base_bdev_name": "malloc2" 00:12:21.675 } 00:12:21.675 } 00:12:21.675 }' 00:12:21.675 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.933 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.933 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:21.933 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.934 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.934 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:21.934 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.934 11:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.934 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:21.934 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.934 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.192 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.192 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:12:22.192 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:22.192 [2024-07-25 11:54:08.298808] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.449 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=418cb78f-79d1-4bde-a3b3-73ce3b6dd418 00:12:22.450 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 418cb78f-79d1-4bde-a3b3-73ce3b6dd418 ']' 00:12:22.450 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:22.450 [2024-07-25 11:54:08.527180] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.450 [2024-07-25 11:54:08.527197] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.450 [2024-07-25 11:54:08.527247] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.450 [2024-07-25 11:54:08.527297] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.450 [2024-07-25 11:54:08.527308] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24b4310 name raid_bdev1, state offline 00:12:22.450 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.450 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:22.707 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:22.707 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:22.707 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.707 11:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:22.964 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.964 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:23.222 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:23.222 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:23.480 11:54:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:23.480 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:23.738 [2024-07-25 11:54:09.682178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:23.738 [2024-07-25 11:54:09.683402] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:23.738 [2024-07-25 11:54:09.683451] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:23.738 [2024-07-25 11:54:09.683488] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:23.738 [2024-07-25 11:54:09.683506] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.738 [2024-07-25 11:54:09.683515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24bd3f0 name raid_bdev1, state configuring 00:12:23.738 request: 00:12:23.738 { 00:12:23.738 "name": "raid_bdev1", 00:12:23.738 "raid_level": "raid1", 00:12:23.738 "base_bdevs": [ 00:12:23.738 "malloc1", 00:12:23.738 "malloc2" 00:12:23.738 ], 00:12:23.738 "superblock": false, 00:12:23.738 "method": "bdev_raid_create", 00:12:23.738 "req_id": 1 00:12:23.738 } 00:12:23.738 Got JSON-RPC error response 00:12:23.738 response: 00:12:23.738 { 00:12:23.738 "code": -17, 00:12:23.738 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:23.738 } 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.738 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:23.996 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 
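A minimal sketch of the negative check exercised above, assuming the same RPC socket; the rpc shell function is shorthand introduced for this sketch, not part of the test scripts. Creating raid_bdev1 directly on malloc1/malloc2 is expected to fail with -17 (File exists), because the malloc bdevs still carry the superblock that was written for raid_bdev1 through pt1/pt2.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc() { "$SPDK"/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# bdev_raid_create on the raw malloc bdevs must be rejected ("File exists", code -17).
if rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
fi
# Mirroring the check at bdev_raid.sh@458: the failed create leaves no raid bdev listed.
[ -z "$(rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]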
00:12:23.996 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:23.996 11:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:24.253 [2024-07-25 11:54:10.135317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:24.253 [2024-07-25 11:54:10.135360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.253 [2024-07-25 11:54:10.135375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24bdd70 00:12:24.253 [2024-07-25 11:54:10.135386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.253 [2024-07-25 11:54:10.136861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.253 [2024-07-25 11:54:10.136887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:24.253 [2024-07-25 11:54:10.136946] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:24.253 [2024-07-25 11:54:10.136970] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:24.253 pt1 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.253 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.254 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.511 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.511 "name": "raid_bdev1", 00:12:24.511 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:24.511 "strip_size_kb": 0, 00:12:24.511 "state": "configuring", 00:12:24.511 "raid_level": "raid1", 00:12:24.511 "superblock": true, 00:12:24.511 "num_base_bdevs": 2, 00:12:24.511 "num_base_bdevs_discovered": 1, 00:12:24.511 "num_base_bdevs_operational": 2, 00:12:24.511 "base_bdevs_list": [ 00:12:24.511 { 00:12:24.511 "name": "pt1", 00:12:24.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.512 "is_configured": true, 00:12:24.512 "data_offset": 2048, 00:12:24.512 "data_size": 63488 00:12:24.512 }, 00:12:24.512 { 00:12:24.512 "name": null, 00:12:24.512 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:24.512 "is_configured": false, 00:12:24.512 "data_offset": 2048, 00:12:24.512 "data_size": 63488 00:12:24.512 } 00:12:24.512 ] 00:12:24.512 }' 00:12:24.512 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.512 11:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.077 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:12:25.077 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:25.077 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:25.077 11:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.077 [2024-07-25 11:54:11.166040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.077 [2024-07-25 11:54:11.166083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.077 [2024-07-25 11:54:11.166099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24b4bb0 00:12:25.077 [2024-07-25 11:54:11.166110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.077 [2024-07-25 11:54:11.166427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.078 [2024-07-25 11:54:11.166445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.078 [2024-07-25 11:54:11.166502] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:25.078 [2024-07-25 11:54:11.166520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.078 [2024-07-25 11:54:11.166607] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24b2de0 00:12:25.078 [2024-07-25 11:54:11.166617] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.078 [2024-07-25 11:54:11.166766] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2313eb0 00:12:25.078 [2024-07-25 11:54:11.166888] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24b2de0 00:12:25.078 [2024-07-25 11:54:11.166898] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24b2de0 00:12:25.078 [2024-07-25 11:54:11.166986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.078 pt2 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.078 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.336 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:25.336 "name": "raid_bdev1", 00:12:25.336 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:25.336 "strip_size_kb": 0, 00:12:25.336 "state": "online", 00:12:25.336 "raid_level": "raid1", 00:12:25.336 "superblock": true, 00:12:25.336 "num_base_bdevs": 2, 00:12:25.336 "num_base_bdevs_discovered": 2, 00:12:25.336 "num_base_bdevs_operational": 2, 00:12:25.336 "base_bdevs_list": [ 00:12:25.336 { 00:12:25.336 "name": "pt1", 00:12:25.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.336 "is_configured": true, 00:12:25.336 "data_offset": 2048, 00:12:25.336 "data_size": 63488 00:12:25.336 }, 00:12:25.336 { 00:12:25.336 "name": "pt2", 00:12:25.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.336 "is_configured": true, 00:12:25.336 "data_offset": 2048, 00:12:25.336 "data_size": 63488 00:12:25.336 } 00:12:25.336 ] 00:12:25.336 }' 00:12:25.336 11:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:25.336 11:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:25.901 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:26.159 [2024-07-25 11:54:12.217020] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.159 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:26.159 "name": "raid_bdev1", 00:12:26.159 "aliases": [ 00:12:26.159 "418cb78f-79d1-4bde-a3b3-73ce3b6dd418" 00:12:26.159 ], 00:12:26.159 "product_name": "Raid Volume", 00:12:26.159 "block_size": 512, 00:12:26.159 "num_blocks": 63488, 00:12:26.159 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:26.159 "assigned_rate_limits": { 00:12:26.159 "rw_ios_per_sec": 0, 00:12:26.159 "rw_mbytes_per_sec": 0, 00:12:26.159 "r_mbytes_per_sec": 0, 00:12:26.159 
"w_mbytes_per_sec": 0 00:12:26.159 }, 00:12:26.159 "claimed": false, 00:12:26.159 "zoned": false, 00:12:26.159 "supported_io_types": { 00:12:26.159 "read": true, 00:12:26.159 "write": true, 00:12:26.159 "unmap": false, 00:12:26.159 "flush": false, 00:12:26.159 "reset": true, 00:12:26.159 "nvme_admin": false, 00:12:26.159 "nvme_io": false, 00:12:26.159 "nvme_io_md": false, 00:12:26.159 "write_zeroes": true, 00:12:26.159 "zcopy": false, 00:12:26.159 "get_zone_info": false, 00:12:26.159 "zone_management": false, 00:12:26.159 "zone_append": false, 00:12:26.159 "compare": false, 00:12:26.159 "compare_and_write": false, 00:12:26.159 "abort": false, 00:12:26.159 "seek_hole": false, 00:12:26.159 "seek_data": false, 00:12:26.159 "copy": false, 00:12:26.159 "nvme_iov_md": false 00:12:26.159 }, 00:12:26.159 "memory_domains": [ 00:12:26.159 { 00:12:26.159 "dma_device_id": "system", 00:12:26.159 "dma_device_type": 1 00:12:26.159 }, 00:12:26.159 { 00:12:26.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.159 "dma_device_type": 2 00:12:26.159 }, 00:12:26.159 { 00:12:26.159 "dma_device_id": "system", 00:12:26.159 "dma_device_type": 1 00:12:26.159 }, 00:12:26.159 { 00:12:26.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.159 "dma_device_type": 2 00:12:26.159 } 00:12:26.159 ], 00:12:26.159 "driver_specific": { 00:12:26.159 "raid": { 00:12:26.159 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:26.159 "strip_size_kb": 0, 00:12:26.159 "state": "online", 00:12:26.159 "raid_level": "raid1", 00:12:26.159 "superblock": true, 00:12:26.159 "num_base_bdevs": 2, 00:12:26.159 "num_base_bdevs_discovered": 2, 00:12:26.159 "num_base_bdevs_operational": 2, 00:12:26.159 "base_bdevs_list": [ 00:12:26.159 { 00:12:26.159 "name": "pt1", 00:12:26.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.159 "is_configured": true, 00:12:26.159 "data_offset": 2048, 00:12:26.159 "data_size": 63488 00:12:26.159 }, 00:12:26.159 { 00:12:26.159 "name": "pt2", 00:12:26.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.159 "is_configured": true, 00:12:26.159 "data_offset": 2048, 00:12:26.159 "data_size": 63488 00:12:26.159 } 00:12:26.159 ] 00:12:26.159 } 00:12:26.159 } 00:12:26.159 }' 00:12:26.159 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:26.417 pt2' 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:26.417 "name": "pt1", 00:12:26.417 "aliases": [ 00:12:26.417 "00000000-0000-0000-0000-000000000001" 00:12:26.417 ], 00:12:26.417 "product_name": "passthru", 00:12:26.417 "block_size": 512, 00:12:26.417 "num_blocks": 65536, 00:12:26.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.417 "assigned_rate_limits": { 00:12:26.417 "rw_ios_per_sec": 0, 00:12:26.417 "rw_mbytes_per_sec": 0, 00:12:26.417 "r_mbytes_per_sec": 0, 00:12:26.417 "w_mbytes_per_sec": 0 00:12:26.417 }, 00:12:26.417 "claimed": true, 00:12:26.417 "claim_type": 
"exclusive_write", 00:12:26.417 "zoned": false, 00:12:26.417 "supported_io_types": { 00:12:26.417 "read": true, 00:12:26.417 "write": true, 00:12:26.417 "unmap": true, 00:12:26.417 "flush": true, 00:12:26.417 "reset": true, 00:12:26.417 "nvme_admin": false, 00:12:26.417 "nvme_io": false, 00:12:26.417 "nvme_io_md": false, 00:12:26.417 "write_zeroes": true, 00:12:26.417 "zcopy": true, 00:12:26.417 "get_zone_info": false, 00:12:26.417 "zone_management": false, 00:12:26.417 "zone_append": false, 00:12:26.417 "compare": false, 00:12:26.417 "compare_and_write": false, 00:12:26.417 "abort": true, 00:12:26.417 "seek_hole": false, 00:12:26.417 "seek_data": false, 00:12:26.417 "copy": true, 00:12:26.417 "nvme_iov_md": false 00:12:26.417 }, 00:12:26.417 "memory_domains": [ 00:12:26.417 { 00:12:26.417 "dma_device_id": "system", 00:12:26.417 "dma_device_type": 1 00:12:26.417 }, 00:12:26.417 { 00:12:26.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.417 "dma_device_type": 2 00:12:26.417 } 00:12:26.417 ], 00:12:26.417 "driver_specific": { 00:12:26.417 "passthru": { 00:12:26.417 "name": "pt1", 00:12:26.417 "base_bdev_name": "malloc1" 00:12:26.417 } 00:12:26.417 } 00:12:26.417 }' 00:12:26.417 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:26.675 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.933 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.933 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.933 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.933 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:26.933 11:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:27.191 "name": "pt2", 00:12:27.191 "aliases": [ 00:12:27.191 "00000000-0000-0000-0000-000000000002" 00:12:27.191 ], 00:12:27.191 "product_name": "passthru", 00:12:27.191 "block_size": 512, 00:12:27.191 "num_blocks": 65536, 00:12:27.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.191 "assigned_rate_limits": { 00:12:27.191 "rw_ios_per_sec": 0, 00:12:27.191 "rw_mbytes_per_sec": 0, 00:12:27.191 "r_mbytes_per_sec": 0, 00:12:27.191 "w_mbytes_per_sec": 0 00:12:27.191 }, 00:12:27.191 "claimed": true, 00:12:27.191 "claim_type": "exclusive_write", 00:12:27.191 "zoned": false, 00:12:27.191 "supported_io_types": { 00:12:27.191 "read": true, 00:12:27.191 "write": 
true, 00:12:27.191 "unmap": true, 00:12:27.191 "flush": true, 00:12:27.191 "reset": true, 00:12:27.191 "nvme_admin": false, 00:12:27.191 "nvme_io": false, 00:12:27.191 "nvme_io_md": false, 00:12:27.191 "write_zeroes": true, 00:12:27.191 "zcopy": true, 00:12:27.191 "get_zone_info": false, 00:12:27.191 "zone_management": false, 00:12:27.191 "zone_append": false, 00:12:27.191 "compare": false, 00:12:27.191 "compare_and_write": false, 00:12:27.191 "abort": true, 00:12:27.191 "seek_hole": false, 00:12:27.191 "seek_data": false, 00:12:27.191 "copy": true, 00:12:27.191 "nvme_iov_md": false 00:12:27.191 }, 00:12:27.191 "memory_domains": [ 00:12:27.191 { 00:12:27.191 "dma_device_id": "system", 00:12:27.191 "dma_device_type": 1 00:12:27.191 }, 00:12:27.191 { 00:12:27.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.191 "dma_device_type": 2 00:12:27.191 } 00:12:27.191 ], 00:12:27.191 "driver_specific": { 00:12:27.191 "passthru": { 00:12:27.191 "name": "pt2", 00:12:27.191 "base_bdev_name": "malloc2" 00:12:27.191 } 00:12:27.191 } 00:12:27.191 }' 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.191 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.448 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:27.448 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.448 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.448 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:27.449 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:27.449 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:27.706 [2024-07-25 11:54:13.608694] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.706 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 418cb78f-79d1-4bde-a3b3-73ce3b6dd418 '!=' 418cb78f-79d1-4bde-a3b3-73ce3b6dd418 ']' 00:12:27.706 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:27.706 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.706 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:27.706 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:27.963 [2024-07-25 11:54:13.837083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.964 11:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.221 11:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.221 "name": "raid_bdev1", 00:12:28.221 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:28.221 "strip_size_kb": 0, 00:12:28.221 "state": "online", 00:12:28.221 "raid_level": "raid1", 00:12:28.221 "superblock": true, 00:12:28.221 "num_base_bdevs": 2, 00:12:28.221 "num_base_bdevs_discovered": 1, 00:12:28.221 "num_base_bdevs_operational": 1, 00:12:28.221 "base_bdevs_list": [ 00:12:28.221 { 00:12:28.221 "name": null, 00:12:28.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.221 "is_configured": false, 00:12:28.221 "data_offset": 2048, 00:12:28.221 "data_size": 63488 00:12:28.221 }, 00:12:28.221 { 00:12:28.221 "name": "pt2", 00:12:28.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.221 "is_configured": true, 00:12:28.221 "data_offset": 2048, 00:12:28.221 "data_size": 63488 00:12:28.221 } 00:12:28.221 ] 00:12:28.221 }' 00:12:28.221 11:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.221 11:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.786 11:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:28.786 [2024-07-25 11:54:14.811639] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.786 [2024-07-25 11:54:14.811662] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.786 [2024-07-25 11:54:14.811711] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.786 [2024-07-25 11:54:14.811751] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.786 [2024-07-25 11:54:14.811762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24b2de0 name raid_bdev1, state offline 00:12:28.786 11:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:28.786 11:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.044 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:29.044 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:29.044 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:29.044 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:29.044 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:12:29.302 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.560 [2024-07-25 11:54:15.497406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:29.560 [2024-07-25 11:54:15.497445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.560 [2024-07-25 11:54:15.497460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24b1f90 00:12:29.560 [2024-07-25 11:54:15.497471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.560 [2024-07-25 11:54:15.498946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.560 [2024-07-25 11:54:15.498972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.560 [2024-07-25 11:54:15.499032] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:29.560 [2024-07-25 11:54:15.499056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.560 [2024-07-25 11:54:15.499131] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2312b40 00:12:29.560 [2024-07-25 11:54:15.499148] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.560 [2024-07-25 11:54:15.499307] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24be810 00:12:29.560 [2024-07-25 11:54:15.499419] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2312b40 00:12:29.560 [2024-07-25 11:54:15.499428] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2312b40 00:12:29.560 [2024-07-25 11:54:15.499520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.560 pt2 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
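A minimal sketch of the state check being set up here, assuming the same RPC socket and the field names visible in the JSON dumps above (the rpc shell function is again shorthand for the sketch): once pt2 is re-registered, the superblock found on it should bring raid_bdev1 back online in degraded form, with only one of its two base bdevs discovered.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc() { "$SPDK"/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Pull the raid_bdev1 entry and confirm it is online, raid1, with 1 base bdev discovered.
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(echo "$info" | jq -r '.state')" = online ] || exit 1
[ "$(echo "$info" | jq -r '.raid_level')" = raid1 ] || exit 1
[ "$(echo "$info" | jq -r '.num_base_bdevs_discovered')" -eq 1 ] || exit 1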
00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.560 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.818 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.818 "name": "raid_bdev1", 00:12:29.818 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:29.818 "strip_size_kb": 0, 00:12:29.818 "state": "online", 00:12:29.818 "raid_level": "raid1", 00:12:29.818 "superblock": true, 00:12:29.818 "num_base_bdevs": 2, 00:12:29.818 "num_base_bdevs_discovered": 1, 00:12:29.818 "num_base_bdevs_operational": 1, 00:12:29.818 "base_bdevs_list": [ 00:12:29.818 { 00:12:29.818 "name": null, 00:12:29.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.818 "is_configured": false, 00:12:29.818 "data_offset": 2048, 00:12:29.818 "data_size": 63488 00:12:29.818 }, 00:12:29.818 { 00:12:29.818 "name": "pt2", 00:12:29.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.818 "is_configured": true, 00:12:29.818 "data_offset": 2048, 00:12:29.818 "data_size": 63488 00:12:29.818 } 00:12:29.818 ] 00:12:29.818 }' 00:12:29.818 11:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.818 11:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.384 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:30.643 [2024-07-25 11:54:16.512131] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.643 [2024-07-25 11:54:16.512159] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.643 [2024-07-25 11:54:16.512208] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.643 [2024-07-25 11:54:16.512249] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.643 [2024-07-25 11:54:16.512260] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2312b40 name raid_bdev1, state offline 00:12:30.643 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.643 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:30.901 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:30.901 11:54:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:30.901 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.902 [2024-07-25 11:54:16.969318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.902 [2024-07-25 11:54:16.969361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.902 [2024-07-25 11:54:16.969378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24b48d0 00:12:30.902 [2024-07-25 11:54:16.969389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.902 [2024-07-25 11:54:16.970869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.902 [2024-07-25 11:54:16.970900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.902 [2024-07-25 11:54:16.970961] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:30.902 [2024-07-25 11:54:16.970984] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.902 [2024-07-25 11:54:16.971077] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:30.902 [2024-07-25 11:54:16.971089] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.902 [2024-07-25 11:54:16.971101] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2313690 name raid_bdev1, state configuring 00:12:30.902 [2024-07-25 11:54:16.971122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.902 [2024-07-25 11:54:16.971182] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23121e0 00:12:30.902 [2024-07-25 11:54:16.971192] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.902 [2024-07-25 11:54:16.971341] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x231a990 00:12:30.902 [2024-07-25 11:54:16.971454] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23121e0 00:12:30.902 [2024-07-25 11:54:16.971463] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23121e0 00:12:30.902 [2024-07-25 11:54:16.971553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.902 pt1 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.902 11:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.161 11:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.161 "name": "raid_bdev1", 00:12:31.161 "uuid": "418cb78f-79d1-4bde-a3b3-73ce3b6dd418", 00:12:31.161 "strip_size_kb": 0, 00:12:31.161 "state": "online", 00:12:31.161 "raid_level": "raid1", 00:12:31.161 "superblock": true, 00:12:31.161 "num_base_bdevs": 2, 00:12:31.161 "num_base_bdevs_discovered": 1, 00:12:31.161 "num_base_bdevs_operational": 1, 00:12:31.161 "base_bdevs_list": [ 00:12:31.161 { 00:12:31.161 "name": null, 00:12:31.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.161 "is_configured": false, 00:12:31.161 "data_offset": 2048, 00:12:31.161 "data_size": 63488 00:12:31.161 }, 00:12:31.161 { 00:12:31.161 "name": "pt2", 00:12:31.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.161 "is_configured": true, 00:12:31.161 "data_offset": 2048, 00:12:31.161 "data_size": 63488 00:12:31.161 } 00:12:31.161 ] 00:12:31.161 }' 00:12:31.161 11:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.161 11:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.728 11:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:31.728 11:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:31.987 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:31.987 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:31.987 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:32.246 [2024-07-25 11:54:18.240856] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 418cb78f-79d1-4bde-a3b3-73ce3b6dd418 '!=' 418cb78f-79d1-4bde-a3b3-73ce3b6dd418 ']' 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4112526 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4112526 ']' 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4112526 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4112526 00:12:32.246 11:54:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4112526' 00:12:32.246 killing process with pid 4112526 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4112526 00:12:32.246 [2024-07-25 11:54:18.310227] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.246 [2024-07-25 11:54:18.310275] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.246 [2024-07-25 11:54:18.310316] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.246 [2024-07-25 11:54:18.310326] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23121e0 name raid_bdev1, state offline 00:12:32.246 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4112526 00:12:32.246 [2024-07-25 11:54:18.325983] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.506 11:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:32.506 00:12:32.506 real 0m14.717s 00:12:32.506 user 0m26.631s 00:12:32.506 sys 0m2.796s 00:12:32.506 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.506 11:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.506 ************************************ 00:12:32.506 END TEST raid_superblock_test 00:12:32.506 ************************************ 00:12:32.506 11:54:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:32.506 11:54:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:32.506 11:54:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.506 11:54:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.506 ************************************ 00:12:32.506 START TEST raid_read_error_test 00:12:32.506 ************************************ 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:32.506 11:54:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.HvGQ1jbtBJ 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4115662 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4115662 /var/tmp/spdk-raid.sock 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4115662 ']' 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:32.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.506 11:54:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.766 [2024-07-25 11:54:18.672753] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
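The bdevperf invocation above (-T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f) starts the I/O engine but creates no bdevs; the stack is assembled over RPC once the socket is listening. The sketch below condenses the rpc.py calls that appear further down in this trace into one place. The loop and variable names are illustrative rather than lifted from bdev_raid.sh, and the real script backgrounds perform_tests and injects the read failure while I/O is already in flight, so the ordering here is simplified.

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # malloc bdev -> error bdev (named EE_<malloc>) -> passthru bdev, per base device
    for bdev in BaseBdev1 BaseBdev2; do
        $RPC bdev_malloc_create 32 512 -b ${bdev}_malloc
        $RPC bdev_error_create ${bdev}_malloc
        $RPC bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}
    done

    # raid1 with an on-disk superblock over the two passthru bdevs
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

    # fail reads hitting BaseBdev1's error bdev, then drive the randrw job
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests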
00:12:32.766 [2024-07-25 11:54:18.672809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115662 ] 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:32.766 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:32.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.766 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:32.766 [2024-07-25 11:54:18.804731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.025 [2024-07-25 11:54:18.890825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.025 [2024-07-25 11:54:18.954787] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.025 [2024-07-25 11:54:18.954820] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.640 11:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.640 11:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:33.640 11:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:33.640 11:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.899 BaseBdev1_malloc 00:12:33.899 11:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:33.899 true 00:12:33.899 11:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:34.158 [2024-07-25 11:54:20.103627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:34.158 [2024-07-25 11:54:20.103674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.158 [2024-07-25 11:54:20.103693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec2190 00:12:34.158 [2024-07-25 11:54:20.103710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.158 [2024-07-25 11:54:20.105357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.158 [2024-07-25 11:54:20.105385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.158 BaseBdev1 00:12:34.158 11:54:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:34.158 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:34.417 BaseBdev2_malloc 00:12:34.417 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:34.417 true 00:12:34.417 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:34.676 [2024-07-25 11:54:20.733677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:34.676 [2024-07-25 11:54:20.733721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.676 [2024-07-25 11:54:20.733739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec6e20 00:12:34.676 [2024-07-25 11:54:20.733751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.676 [2024-07-25 11:54:20.735155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.676 [2024-07-25 11:54:20.735183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.676 BaseBdev2 00:12:34.676 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:34.935 [2024-07-25 11:54:20.958293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.935 [2024-07-25 11:54:20.959474] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.935 [2024-07-25 11:54:20.959656] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ec8a50 00:12:34.935 [2024-07-25 11:54:20.959669] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.935 [2024-07-25 11:54:20.959854] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d1d140 00:12:34.935 [2024-07-25 11:54:20.960005] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ec8a50 00:12:34.935 [2024-07-25 11:54:20.960015] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ec8a50 00:12:34.935 [2024-07-25 11:54:20.960113] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.935 11:54:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.194 11:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.194 "name": "raid_bdev1", 00:12:35.194 "uuid": "b8e9bc66-7e71-4972-a470-1c1c9391d70b", 00:12:35.194 "strip_size_kb": 0, 00:12:35.194 "state": "online", 00:12:35.194 "raid_level": "raid1", 00:12:35.194 "superblock": true, 00:12:35.194 "num_base_bdevs": 2, 00:12:35.194 "num_base_bdevs_discovered": 2, 00:12:35.194 "num_base_bdevs_operational": 2, 00:12:35.194 "base_bdevs_list": [ 00:12:35.194 { 00:12:35.194 "name": "BaseBdev1", 00:12:35.194 "uuid": "9aa1d360-33b2-5daa-8c33-1bbdf07970a1", 00:12:35.194 "is_configured": true, 00:12:35.194 "data_offset": 2048, 00:12:35.194 "data_size": 63488 00:12:35.194 }, 00:12:35.194 { 00:12:35.194 "name": "BaseBdev2", 00:12:35.194 "uuid": "ef0922bc-885e-5b75-b95c-8f0a05a89eac", 00:12:35.194 "is_configured": true, 00:12:35.194 "data_offset": 2048, 00:12:35.194 "data_size": 63488 00:12:35.194 } 00:12:35.194 ] 00:12:35.194 }' 00:12:35.194 11:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.194 11:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 11:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:35.761 11:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:35.761 [2024-07-25 11:54:21.864957] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ec39d0 00:12:36.697 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=2 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.957 11:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.216 11:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.216 "name": "raid_bdev1", 00:12:37.216 "uuid": "b8e9bc66-7e71-4972-a470-1c1c9391d70b", 00:12:37.216 "strip_size_kb": 0, 00:12:37.216 "state": "online", 00:12:37.216 "raid_level": "raid1", 00:12:37.216 "superblock": true, 00:12:37.216 "num_base_bdevs": 2, 00:12:37.216 "num_base_bdevs_discovered": 2, 00:12:37.216 "num_base_bdevs_operational": 2, 00:12:37.216 "base_bdevs_list": [ 00:12:37.216 { 00:12:37.216 "name": "BaseBdev1", 00:12:37.216 "uuid": "9aa1d360-33b2-5daa-8c33-1bbdf07970a1", 00:12:37.216 "is_configured": true, 00:12:37.216 "data_offset": 2048, 00:12:37.216 "data_size": 63488 00:12:37.216 }, 00:12:37.216 { 00:12:37.216 "name": "BaseBdev2", 00:12:37.216 "uuid": "ef0922bc-885e-5b75-b95c-8f0a05a89eac", 00:12:37.216 "is_configured": true, 00:12:37.216 "data_offset": 2048, 00:12:37.216 "data_size": 63488 00:12:37.216 } 00:12:37.216 ] 00:12:37.216 }' 00:12:37.216 11:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.216 11:54:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.781 11:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:38.040 [2024-07-25 11:54:23.961058] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.040 [2024-07-25 11:54:23.961096] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.040 [2024-07-25 11:54:23.964018] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.040 [2024-07-25 11:54:23.964048] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.040 [2024-07-25 11:54:23.964118] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.040 [2024-07-25 11:54:23.964129] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec8a50 name raid_bdev1, state offline 00:12:38.040 0 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4115662 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4115662 ']' 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4115662 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.040 11:54:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4115662 00:12:38.040 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.040 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.040 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4115662' 00:12:38.040 killing process with pid 4115662 00:12:38.040 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4115662 00:12:38.040 [2024-07-25 11:54:24.037428] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.040 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4115662 00:12:38.040 [2024-07-25 11:54:24.047337] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.HvGQ1jbtBJ 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:38.299 00:12:38.299 real 0m5.660s 00:12:38.299 user 0m8.674s 00:12:38.299 sys 0m1.029s 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.299 11:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.299 ************************************ 00:12:38.299 END TEST raid_read_error_test 00:12:38.299 ************************************ 00:12:38.299 11:54:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:38.299 11:54:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:38.299 11:54:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.299 11:54:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.299 ************************************ 00:12:38.299 START TEST raid_write_error_test 00:12:38.299 ************************************ 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:38.299 11:54:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:38.299 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GAY6ujrAVg 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4116810 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4116810 /var/tmp/spdk-raid.sock 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4116810 ']' 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:38.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.300 11:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.300 [2024-07-25 11:54:24.413209] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
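raid_write_error_test builds the same malloc/error/passthru/raid1 stack as the read-error case; the difference is what happens after the fault. Injecting a write failure makes raid1 fail BaseBdev1 out of the array ("Failing base bdev in slot 0", later in this trace) and continue degraded, which the test confirms through the usual get_bdevs query. A condensed sketch, assuming the stack is already online:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # every write reaching BaseBdev1's error bdev now fails
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure

    # raid_bdev1 should stay online but drop to one discovered/operational base
    # bdev; BaseBdev1's slot reports the all-zero uuid and is_configured=false.
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'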
00:12:38.300 [2024-07-25 11:54:24.413265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4116810 ] 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:38.559 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:38.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.559 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:38.559 [2024-07-25 11:54:24.546314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.559 [2024-07-25 11:54:24.630893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.818 [2024-07-25 11:54:24.701621] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.818 [2024-07-25 11:54:24.701659] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.439 11:54:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.439 11:54:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:39.439 11:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:39.439 11:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.439 BaseBdev1_malloc 00:12:39.439 11:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:39.698 true 00:12:39.698 11:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:39.957 [2024-07-25 11:54:25.986994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:39.957 [2024-07-25 11:54:25.987037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.957 [2024-07-25 11:54:25.987054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d3190 00:12:39.957 [2024-07-25 11:54:25.987065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.957 [2024-07-25 11:54:25.988551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.957 [2024-07-25 11:54:25.988579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.957 BaseBdev1 00:12:39.957 11:54:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:39.957 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.216 BaseBdev2_malloc 00:12:40.216 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:40.474 true 00:12:40.475 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:40.733 [2024-07-25 11:54:26.652968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:40.733 [2024-07-25 11:54:26.653007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.733 [2024-07-25 11:54:26.653024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d7e20 00:12:40.733 [2024-07-25 11:54:26.653036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.733 [2024-07-25 11:54:26.654384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.733 [2024-07-25 11:54:26.654413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.733 BaseBdev2 00:12:40.733 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:40.992 [2024-07-25 11:54:26.881591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.992 [2024-07-25 11:54:26.882693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.992 [2024-07-25 11:54:26.882868] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11d9a50 00:12:40.992 [2024-07-25 11:54:26.882880] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.992 [2024-07-25 11:54:26.883045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x102e140 00:12:40.992 [2024-07-25 11:54:26.883199] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11d9a50 00:12:40.992 [2024-07-25 11:54:26.883209] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11d9a50 00:12:40.992 [2024-07-25 11:54:26.883302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.992 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.993 11:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.252 11:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.252 "name": "raid_bdev1", 00:12:41.252 "uuid": "1c94135c-0799-4f55-9896-615cc118c33a", 00:12:41.252 "strip_size_kb": 0, 00:12:41.252 "state": "online", 00:12:41.252 "raid_level": "raid1", 00:12:41.252 "superblock": true, 00:12:41.252 "num_base_bdevs": 2, 00:12:41.252 "num_base_bdevs_discovered": 2, 00:12:41.252 "num_base_bdevs_operational": 2, 00:12:41.252 "base_bdevs_list": [ 00:12:41.252 { 00:12:41.252 "name": "BaseBdev1", 00:12:41.252 "uuid": "a9509f78-762d-5b0e-8209-90e00b6d5b2a", 00:12:41.252 "is_configured": true, 00:12:41.252 "data_offset": 2048, 00:12:41.252 "data_size": 63488 00:12:41.252 }, 00:12:41.252 { 00:12:41.252 "name": "BaseBdev2", 00:12:41.252 "uuid": "2c268f6b-dec5-5157-a7d8-1257bd62bb93", 00:12:41.252 "is_configured": true, 00:12:41.252 "data_offset": 2048, 00:12:41.252 "data_size": 63488 00:12:41.252 } 00:12:41.252 ] 00:12:41.252 }' 00:12:41.252 11:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.252 11:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.821 11:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:41.821 11:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:41.821 [2024-07-25 11:54:27.784225] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11d49d0 00:12:42.757 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:43.016 [2024-07-25 11:54:28.902621] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:43.016 [2024-07-25 11:54:28.902676] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.016 [2024-07-25 11:54:28.902850] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x11d49d0 00:12:43.016 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:43.016 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:43.016 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:43.016 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:12:43.016 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.017 11:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.276 11:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.276 "name": "raid_bdev1", 00:12:43.276 "uuid": "1c94135c-0799-4f55-9896-615cc118c33a", 00:12:43.276 "strip_size_kb": 0, 00:12:43.276 "state": "online", 00:12:43.276 "raid_level": "raid1", 00:12:43.276 "superblock": true, 00:12:43.276 "num_base_bdevs": 2, 00:12:43.276 "num_base_bdevs_discovered": 1, 00:12:43.276 "num_base_bdevs_operational": 1, 00:12:43.276 "base_bdevs_list": [ 00:12:43.276 { 00:12:43.276 "name": null, 00:12:43.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.276 "is_configured": false, 00:12:43.276 "data_offset": 2048, 00:12:43.276 "data_size": 63488 00:12:43.276 }, 00:12:43.276 { 00:12:43.276 "name": "BaseBdev2", 00:12:43.276 "uuid": "2c268f6b-dec5-5157-a7d8-1257bd62bb93", 00:12:43.276 "is_configured": true, 00:12:43.276 "data_offset": 2048, 00:12:43.276 "data_size": 63488 00:12:43.276 } 00:12:43.276 ] 00:12:43.276 }' 00:12:43.276 11:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.276 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:43.845 [2024-07-25 11:54:29.921515] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.845 [2024-07-25 11:54:29.921549] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.845 [2024-07-25 11:54:29.924426] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.845 [2024-07-25 11:54:29.924451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.845 [2024-07-25 11:54:29.924501] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.845 [2024-07-25 11:54:29.924512] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11d9a50 name raid_bdev1, state offline 00:12:43.845 0 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4116810 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 4116810 ']' 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4116810 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.845 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4116810 00:12:44.104 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.104 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.104 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4116810' 00:12:44.104 killing process with pid 4116810 00:12:44.104 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4116810 00:12:44.104 [2024-07-25 11:54:29.996233] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.104 11:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4116810 00:12:44.104 [2024-07-25 11:54:30.005767] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GAY6ujrAVg 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:44.104 00:12:44.104 real 0m5.873s 00:12:44.104 user 0m9.069s 00:12:44.104 sys 0m1.056s 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.104 11:54:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.104 ************************************ 00:12:44.104 END TEST raid_write_error_test 00:12:44.104 ************************************ 00:12:44.363 11:54:30 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:44.363 11:54:30 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:44.363 11:54:30 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:44.363 11:54:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:44.364 11:54:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.364 11:54:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.364 ************************************ 00:12:44.364 START TEST raid_state_function_test 00:12:44.364 ************************************ 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 
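The raid_write_error_test trace that ends above and the raid_state_function_test that starts here both drive SPDK through the same pattern: rpc.py calls against the /var/tmp/spdk-raid.sock socket, with bdev_raid_get_bdevs output filtered through jq to verify state. A condensed sketch of that pattern follows; it is an illustrative reconstruction from the trace, not captured output, and it assumes a target is already listening on that socket with rpc.py run from the SPDK checkout used above.

# Query the raid bdev under test and keep only its entry (mirrors bdev_raid.sh@126 in the trace).
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")'

# Inject a write failure into the error-wrapped base bdev (mirrors bdev_raid.sh@827 above).
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure

# raid1 is expected to stay online with one base bdev discovered; delete it once verified.
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1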
00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4117962 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4117962' 00:12:44.364 Process raid pid: 4117962 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4117962 /var/tmp/spdk-raid.sock 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4117962 ']' 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.364 11:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.364 [2024-07-25 11:54:30.357828] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:12:44.364 [2024-07-25 11:54:30.357883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: 
Requested device 0000:3f:01.1 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:44.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:44.364 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:44.623 [2024-07-25 11:54:30.491653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.624 [2024-07-25 11:54:30.579946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.624 [2024-07-25 11:54:30.645901] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.624 [2024-07-25 11:54:30.645935] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.191 11:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.191 11:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:45.191 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:45.450 [2024-07-25 11:54:31.404896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:45.450 [2024-07-25 11:54:31.404935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:45.450 [2024-07-25 11:54:31.404945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.450 [2024-07-25 11:54:31.404956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.450 [2024-07-25 11:54:31.404964] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:45.450 [2024-07-25 11:54:31.404974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.450 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.708 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.709 "name": "Existed_Raid", 00:12:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.709 "strip_size_kb": 64, 00:12:45.709 "state": "configuring", 00:12:45.709 "raid_level": "raid0", 00:12:45.709 "superblock": false, 00:12:45.709 "num_base_bdevs": 3, 00:12:45.709 "num_base_bdevs_discovered": 0, 00:12:45.709 "num_base_bdevs_operational": 3, 00:12:45.709 "base_bdevs_list": [ 00:12:45.709 { 00:12:45.709 "name": "BaseBdev1", 00:12:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.709 "is_configured": false, 00:12:45.709 "data_offset": 0, 00:12:45.709 "data_size": 0 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "name": "BaseBdev2", 00:12:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.709 "is_configured": false, 00:12:45.709 "data_offset": 0, 00:12:45.709 "data_size": 0 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "name": "BaseBdev3", 00:12:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.709 "is_configured": false, 00:12:45.709 "data_offset": 0, 00:12:45.709 "data_size": 0 00:12:45.709 } 00:12:45.709 ] 00:12:45.709 }' 00:12:45.709 11:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.709 11:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.276 11:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:46.535 [2024-07-25 11:54:32.439482] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.535 [2024-07-25 11:54:32.439511] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1119f40 name Existed_Raid, state configuring 00:12:46.535 
11:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:46.795 [2024-07-25 11:54:32.668094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.795 [2024-07-25 11:54:32.668120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.795 [2024-07-25 11:54:32.668129] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.795 [2024-07-25 11:54:32.668146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.795 [2024-07-25 11:54:32.668154] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:46.795 [2024-07-25 11:54:32.668164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.795 11:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:46.795 [2024-07-25 11:54:32.902417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.795 BaseBdev1 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:47.054 11:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:47.054 11:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.336 [ 00:12:47.336 { 00:12:47.336 "name": "BaseBdev1", 00:12:47.336 "aliases": [ 00:12:47.336 "06bbcc28-c445-4654-943f-435f2ca2733c" 00:12:47.336 ], 00:12:47.336 "product_name": "Malloc disk", 00:12:47.336 "block_size": 512, 00:12:47.336 "num_blocks": 65536, 00:12:47.336 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:47.336 "assigned_rate_limits": { 00:12:47.336 "rw_ios_per_sec": 0, 00:12:47.336 "rw_mbytes_per_sec": 0, 00:12:47.336 "r_mbytes_per_sec": 0, 00:12:47.336 "w_mbytes_per_sec": 0 00:12:47.336 }, 00:12:47.336 "claimed": true, 00:12:47.336 "claim_type": "exclusive_write", 00:12:47.336 "zoned": false, 00:12:47.336 "supported_io_types": { 00:12:47.336 "read": true, 00:12:47.336 "write": true, 00:12:47.336 "unmap": true, 00:12:47.336 "flush": true, 00:12:47.336 "reset": true, 00:12:47.336 "nvme_admin": false, 00:12:47.336 "nvme_io": false, 00:12:47.336 "nvme_io_md": false, 00:12:47.336 "write_zeroes": true, 00:12:47.336 "zcopy": true, 00:12:47.336 "get_zone_info": false, 00:12:47.336 "zone_management": false, 00:12:47.336 
"zone_append": false, 00:12:47.336 "compare": false, 00:12:47.336 "compare_and_write": false, 00:12:47.336 "abort": true, 00:12:47.336 "seek_hole": false, 00:12:47.336 "seek_data": false, 00:12:47.336 "copy": true, 00:12:47.336 "nvme_iov_md": false 00:12:47.336 }, 00:12:47.336 "memory_domains": [ 00:12:47.336 { 00:12:47.336 "dma_device_id": "system", 00:12:47.336 "dma_device_type": 1 00:12:47.336 }, 00:12:47.336 { 00:12:47.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.336 "dma_device_type": 2 00:12:47.336 } 00:12:47.336 ], 00:12:47.336 "driver_specific": {} 00:12:47.336 } 00:12:47.336 ] 00:12:47.336 11:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.337 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.653 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.653 "name": "Existed_Raid", 00:12:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.653 "strip_size_kb": 64, 00:12:47.653 "state": "configuring", 00:12:47.653 "raid_level": "raid0", 00:12:47.653 "superblock": false, 00:12:47.653 "num_base_bdevs": 3, 00:12:47.653 "num_base_bdevs_discovered": 1, 00:12:47.653 "num_base_bdevs_operational": 3, 00:12:47.653 "base_bdevs_list": [ 00:12:47.653 { 00:12:47.653 "name": "BaseBdev1", 00:12:47.653 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:47.653 "is_configured": true, 00:12:47.653 "data_offset": 0, 00:12:47.653 "data_size": 65536 00:12:47.653 }, 00:12:47.653 { 00:12:47.653 "name": "BaseBdev2", 00:12:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.653 "is_configured": false, 00:12:47.653 "data_offset": 0, 00:12:47.653 "data_size": 0 00:12:47.653 }, 00:12:47.653 { 00:12:47.653 "name": "BaseBdev3", 00:12:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.653 "is_configured": false, 00:12:47.653 "data_offset": 0, 00:12:47.653 "data_size": 0 00:12:47.653 } 00:12:47.653 ] 00:12:47.653 }' 00:12:47.653 11:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.653 11:54:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.221 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.221 [2024-07-25 11:54:34.334180] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.222 [2024-07-25 11:54:34.334219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1119810 name Existed_Raid, state configuring 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:48.481 [2024-07-25 11:54:34.562818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.481 [2024-07-25 11:54:34.564210] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.481 [2024-07-25 11:54:34.564242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.481 [2024-07-25 11:54:34.564252] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.481 [2024-07-25 11:54:34.564262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.481 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.740 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.740 "name": "Existed_Raid", 00:12:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.740 "strip_size_kb": 64, 00:12:48.740 "state": "configuring", 00:12:48.740 "raid_level": "raid0", 00:12:48.740 "superblock": false, 00:12:48.740 "num_base_bdevs": 3, 
00:12:48.740 "num_base_bdevs_discovered": 1, 00:12:48.740 "num_base_bdevs_operational": 3, 00:12:48.740 "base_bdevs_list": [ 00:12:48.740 { 00:12:48.740 "name": "BaseBdev1", 00:12:48.740 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:48.740 "is_configured": true, 00:12:48.740 "data_offset": 0, 00:12:48.740 "data_size": 65536 00:12:48.740 }, 00:12:48.740 { 00:12:48.740 "name": "BaseBdev2", 00:12:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.740 "is_configured": false, 00:12:48.740 "data_offset": 0, 00:12:48.740 "data_size": 0 00:12:48.740 }, 00:12:48.740 { 00:12:48.740 "name": "BaseBdev3", 00:12:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.740 "is_configured": false, 00:12:48.740 "data_offset": 0, 00:12:48.740 "data_size": 0 00:12:48.740 } 00:12:48.740 ] 00:12:48.740 }' 00:12:48.740 11:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.740 11:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.308 11:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.567 [2024-07-25 11:54:35.604783] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.567 BaseBdev2 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:49.567 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.826 11:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.085 [ 00:12:50.085 { 00:12:50.085 "name": "BaseBdev2", 00:12:50.085 "aliases": [ 00:12:50.085 "1164d419-9eeb-42fa-97b8-33c2dd696005" 00:12:50.085 ], 00:12:50.085 "product_name": "Malloc disk", 00:12:50.085 "block_size": 512, 00:12:50.085 "num_blocks": 65536, 00:12:50.085 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:50.085 "assigned_rate_limits": { 00:12:50.085 "rw_ios_per_sec": 0, 00:12:50.085 "rw_mbytes_per_sec": 0, 00:12:50.085 "r_mbytes_per_sec": 0, 00:12:50.085 "w_mbytes_per_sec": 0 00:12:50.085 }, 00:12:50.085 "claimed": true, 00:12:50.085 "claim_type": "exclusive_write", 00:12:50.085 "zoned": false, 00:12:50.085 "supported_io_types": { 00:12:50.085 "read": true, 00:12:50.085 "write": true, 00:12:50.085 "unmap": true, 00:12:50.085 "flush": true, 00:12:50.085 "reset": true, 00:12:50.085 "nvme_admin": false, 00:12:50.085 "nvme_io": false, 00:12:50.085 "nvme_io_md": false, 00:12:50.085 "write_zeroes": true, 00:12:50.085 "zcopy": true, 00:12:50.085 "get_zone_info": false, 00:12:50.085 "zone_management": false, 00:12:50.085 
"zone_append": false, 00:12:50.085 "compare": false, 00:12:50.085 "compare_and_write": false, 00:12:50.085 "abort": true, 00:12:50.085 "seek_hole": false, 00:12:50.085 "seek_data": false, 00:12:50.085 "copy": true, 00:12:50.085 "nvme_iov_md": false 00:12:50.085 }, 00:12:50.085 "memory_domains": [ 00:12:50.085 { 00:12:50.085 "dma_device_id": "system", 00:12:50.085 "dma_device_type": 1 00:12:50.085 }, 00:12:50.085 { 00:12:50.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.085 "dma_device_type": 2 00:12:50.085 } 00:12:50.085 ], 00:12:50.085 "driver_specific": {} 00:12:50.085 } 00:12:50.085 ] 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.085 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.344 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.344 "name": "Existed_Raid", 00:12:50.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.344 "strip_size_kb": 64, 00:12:50.344 "state": "configuring", 00:12:50.344 "raid_level": "raid0", 00:12:50.344 "superblock": false, 00:12:50.344 "num_base_bdevs": 3, 00:12:50.344 "num_base_bdevs_discovered": 2, 00:12:50.344 "num_base_bdevs_operational": 3, 00:12:50.344 "base_bdevs_list": [ 00:12:50.344 { 00:12:50.344 "name": "BaseBdev1", 00:12:50.344 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:50.344 "is_configured": true, 00:12:50.344 "data_offset": 0, 00:12:50.344 "data_size": 65536 00:12:50.344 }, 00:12:50.344 { 00:12:50.344 "name": "BaseBdev2", 00:12:50.344 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:50.344 "is_configured": true, 00:12:50.344 "data_offset": 0, 00:12:50.344 "data_size": 65536 00:12:50.344 }, 00:12:50.344 { 00:12:50.344 "name": "BaseBdev3", 00:12:50.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.344 "is_configured": false, 00:12:50.344 "data_offset": 0, 00:12:50.344 
"data_size": 0 00:12:50.344 } 00:12:50.344 ] 00:12:50.344 }' 00:12:50.344 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.344 11:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.912 11:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.170 [2024-07-25 11:54:37.051826] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.170 [2024-07-25 11:54:37.051866] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x111a700 00:12:51.170 [2024-07-25 11:54:37.051874] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:51.170 [2024-07-25 11:54:37.052050] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x111a3d0 00:12:51.170 [2024-07-25 11:54:37.052176] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x111a700 00:12:51.170 [2024-07-25 11:54:37.052186] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x111a700 00:12:51.170 [2024-07-25 11:54:37.052345] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.170 BaseBdev3 00:12:51.170 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:51.170 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:51.170 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.170 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:51.171 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.171 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.171 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.429 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.429 [ 00:12:51.429 { 00:12:51.429 "name": "BaseBdev3", 00:12:51.429 "aliases": [ 00:12:51.429 "50299eb3-a4c6-408d-851d-9d41d3c42917" 00:12:51.429 ], 00:12:51.429 "product_name": "Malloc disk", 00:12:51.429 "block_size": 512, 00:12:51.429 "num_blocks": 65536, 00:12:51.429 "uuid": "50299eb3-a4c6-408d-851d-9d41d3c42917", 00:12:51.429 "assigned_rate_limits": { 00:12:51.429 "rw_ios_per_sec": 0, 00:12:51.429 "rw_mbytes_per_sec": 0, 00:12:51.429 "r_mbytes_per_sec": 0, 00:12:51.429 "w_mbytes_per_sec": 0 00:12:51.429 }, 00:12:51.429 "claimed": true, 00:12:51.429 "claim_type": "exclusive_write", 00:12:51.429 "zoned": false, 00:12:51.429 "supported_io_types": { 00:12:51.429 "read": true, 00:12:51.429 "write": true, 00:12:51.429 "unmap": true, 00:12:51.429 "flush": true, 00:12:51.429 "reset": true, 00:12:51.429 "nvme_admin": false, 00:12:51.429 "nvme_io": false, 00:12:51.430 "nvme_io_md": false, 00:12:51.430 "write_zeroes": true, 00:12:51.430 "zcopy": true, 00:12:51.430 "get_zone_info": false, 00:12:51.430 "zone_management": false, 00:12:51.430 
"zone_append": false, 00:12:51.430 "compare": false, 00:12:51.430 "compare_and_write": false, 00:12:51.430 "abort": true, 00:12:51.430 "seek_hole": false, 00:12:51.430 "seek_data": false, 00:12:51.430 "copy": true, 00:12:51.430 "nvme_iov_md": false 00:12:51.430 }, 00:12:51.430 "memory_domains": [ 00:12:51.430 { 00:12:51.430 "dma_device_id": "system", 00:12:51.430 "dma_device_type": 1 00:12:51.430 }, 00:12:51.430 { 00:12:51.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.430 "dma_device_type": 2 00:12:51.430 } 00:12:51.430 ], 00:12:51.430 "driver_specific": {} 00:12:51.430 } 00:12:51.430 ] 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.430 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.689 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.689 "name": "Existed_Raid", 00:12:51.689 "uuid": "7f721f22-b7e0-4462-bc77-13943041008f", 00:12:51.689 "strip_size_kb": 64, 00:12:51.689 "state": "online", 00:12:51.689 "raid_level": "raid0", 00:12:51.689 "superblock": false, 00:12:51.689 "num_base_bdevs": 3, 00:12:51.689 "num_base_bdevs_discovered": 3, 00:12:51.689 "num_base_bdevs_operational": 3, 00:12:51.689 "base_bdevs_list": [ 00:12:51.689 { 00:12:51.689 "name": "BaseBdev1", 00:12:51.689 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:51.689 "is_configured": true, 00:12:51.689 "data_offset": 0, 00:12:51.689 "data_size": 65536 00:12:51.689 }, 00:12:51.689 { 00:12:51.689 "name": "BaseBdev2", 00:12:51.689 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:51.689 "is_configured": true, 00:12:51.689 "data_offset": 0, 00:12:51.689 "data_size": 65536 00:12:51.689 }, 00:12:51.689 { 00:12:51.689 "name": "BaseBdev3", 00:12:51.689 "uuid": "50299eb3-a4c6-408d-851d-9d41d3c42917", 00:12:51.689 "is_configured": true, 00:12:51.689 "data_offset": 0, 00:12:51.689 "data_size": 65536 
00:12:51.689 } 00:12:51.689 ] 00:12:51.689 }' 00:12:51.689 11:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.689 11:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:52.256 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:52.516 [2024-07-25 11:54:38.540076] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:52.516 "name": "Existed_Raid", 00:12:52.516 "aliases": [ 00:12:52.516 "7f721f22-b7e0-4462-bc77-13943041008f" 00:12:52.516 ], 00:12:52.516 "product_name": "Raid Volume", 00:12:52.516 "block_size": 512, 00:12:52.516 "num_blocks": 196608, 00:12:52.516 "uuid": "7f721f22-b7e0-4462-bc77-13943041008f", 00:12:52.516 "assigned_rate_limits": { 00:12:52.516 "rw_ios_per_sec": 0, 00:12:52.516 "rw_mbytes_per_sec": 0, 00:12:52.516 "r_mbytes_per_sec": 0, 00:12:52.516 "w_mbytes_per_sec": 0 00:12:52.516 }, 00:12:52.516 "claimed": false, 00:12:52.516 "zoned": false, 00:12:52.516 "supported_io_types": { 00:12:52.516 "read": true, 00:12:52.516 "write": true, 00:12:52.516 "unmap": true, 00:12:52.516 "flush": true, 00:12:52.516 "reset": true, 00:12:52.516 "nvme_admin": false, 00:12:52.516 "nvme_io": false, 00:12:52.516 "nvme_io_md": false, 00:12:52.516 "write_zeroes": true, 00:12:52.516 "zcopy": false, 00:12:52.516 "get_zone_info": false, 00:12:52.516 "zone_management": false, 00:12:52.516 "zone_append": false, 00:12:52.516 "compare": false, 00:12:52.516 "compare_and_write": false, 00:12:52.516 "abort": false, 00:12:52.516 "seek_hole": false, 00:12:52.516 "seek_data": false, 00:12:52.516 "copy": false, 00:12:52.516 "nvme_iov_md": false 00:12:52.516 }, 00:12:52.516 "memory_domains": [ 00:12:52.516 { 00:12:52.516 "dma_device_id": "system", 00:12:52.516 "dma_device_type": 1 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.516 "dma_device_type": 2 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "dma_device_id": "system", 00:12:52.516 "dma_device_type": 1 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.516 "dma_device_type": 2 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "dma_device_id": "system", 00:12:52.516 "dma_device_type": 1 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.516 "dma_device_type": 2 00:12:52.516 } 00:12:52.516 ], 00:12:52.516 "driver_specific": { 00:12:52.516 "raid": { 00:12:52.516 "uuid": "7f721f22-b7e0-4462-bc77-13943041008f", 
00:12:52.516 "strip_size_kb": 64, 00:12:52.516 "state": "online", 00:12:52.516 "raid_level": "raid0", 00:12:52.516 "superblock": false, 00:12:52.516 "num_base_bdevs": 3, 00:12:52.516 "num_base_bdevs_discovered": 3, 00:12:52.516 "num_base_bdevs_operational": 3, 00:12:52.516 "base_bdevs_list": [ 00:12:52.516 { 00:12:52.516 "name": "BaseBdev1", 00:12:52.516 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:52.516 "is_configured": true, 00:12:52.516 "data_offset": 0, 00:12:52.516 "data_size": 65536 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "name": "BaseBdev2", 00:12:52.516 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:52.516 "is_configured": true, 00:12:52.516 "data_offset": 0, 00:12:52.516 "data_size": 65536 00:12:52.516 }, 00:12:52.516 { 00:12:52.516 "name": "BaseBdev3", 00:12:52.516 "uuid": "50299eb3-a4c6-408d-851d-9d41d3c42917", 00:12:52.516 "is_configured": true, 00:12:52.516 "data_offset": 0, 00:12:52.516 "data_size": 65536 00:12:52.516 } 00:12:52.516 ] 00:12:52.516 } 00:12:52.516 } 00:12:52.516 }' 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:52.516 BaseBdev2 00:12:52.516 BaseBdev3' 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:52.516 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:52.775 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:52.775 "name": "BaseBdev1", 00:12:52.775 "aliases": [ 00:12:52.775 "06bbcc28-c445-4654-943f-435f2ca2733c" 00:12:52.775 ], 00:12:52.775 "product_name": "Malloc disk", 00:12:52.775 "block_size": 512, 00:12:52.775 "num_blocks": 65536, 00:12:52.775 "uuid": "06bbcc28-c445-4654-943f-435f2ca2733c", 00:12:52.775 "assigned_rate_limits": { 00:12:52.775 "rw_ios_per_sec": 0, 00:12:52.775 "rw_mbytes_per_sec": 0, 00:12:52.775 "r_mbytes_per_sec": 0, 00:12:52.775 "w_mbytes_per_sec": 0 00:12:52.775 }, 00:12:52.775 "claimed": true, 00:12:52.775 "claim_type": "exclusive_write", 00:12:52.775 "zoned": false, 00:12:52.775 "supported_io_types": { 00:12:52.775 "read": true, 00:12:52.775 "write": true, 00:12:52.775 "unmap": true, 00:12:52.775 "flush": true, 00:12:52.775 "reset": true, 00:12:52.775 "nvme_admin": false, 00:12:52.775 "nvme_io": false, 00:12:52.775 "nvme_io_md": false, 00:12:52.775 "write_zeroes": true, 00:12:52.775 "zcopy": true, 00:12:52.775 "get_zone_info": false, 00:12:52.775 "zone_management": false, 00:12:52.775 "zone_append": false, 00:12:52.775 "compare": false, 00:12:52.775 "compare_and_write": false, 00:12:52.775 "abort": true, 00:12:52.775 "seek_hole": false, 00:12:52.775 "seek_data": false, 00:12:52.775 "copy": true, 00:12:52.775 "nvme_iov_md": false 00:12:52.775 }, 00:12:52.775 "memory_domains": [ 00:12:52.775 { 00:12:52.775 "dma_device_id": "system", 00:12:52.775 "dma_device_type": 1 00:12:52.775 }, 00:12:52.775 { 00:12:52.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.775 "dma_device_type": 2 00:12:52.775 } 00:12:52.775 ], 00:12:52.775 "driver_specific": {} 00:12:52.775 }' 00:12:52.775 11:54:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.775 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.035 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.035 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.035 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.035 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.035 11:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.035 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.294 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.294 "name": "BaseBdev2", 00:12:53.294 "aliases": [ 00:12:53.294 "1164d419-9eeb-42fa-97b8-33c2dd696005" 00:12:53.294 ], 00:12:53.294 "product_name": "Malloc disk", 00:12:53.294 "block_size": 512, 00:12:53.294 "num_blocks": 65536, 00:12:53.294 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:53.294 "assigned_rate_limits": { 00:12:53.294 "rw_ios_per_sec": 0, 00:12:53.294 "rw_mbytes_per_sec": 0, 00:12:53.294 "r_mbytes_per_sec": 0, 00:12:53.294 "w_mbytes_per_sec": 0 00:12:53.294 }, 00:12:53.294 "claimed": true, 00:12:53.294 "claim_type": "exclusive_write", 00:12:53.294 "zoned": false, 00:12:53.294 "supported_io_types": { 00:12:53.294 "read": true, 00:12:53.294 "write": true, 00:12:53.294 "unmap": true, 00:12:53.294 "flush": true, 00:12:53.294 "reset": true, 00:12:53.294 "nvme_admin": false, 00:12:53.294 "nvme_io": false, 00:12:53.294 "nvme_io_md": false, 00:12:53.294 "write_zeroes": true, 00:12:53.294 "zcopy": true, 00:12:53.294 "get_zone_info": false, 00:12:53.294 "zone_management": false, 00:12:53.294 "zone_append": false, 00:12:53.294 "compare": false, 00:12:53.294 "compare_and_write": false, 00:12:53.294 "abort": true, 00:12:53.294 "seek_hole": false, 00:12:53.294 "seek_data": false, 00:12:53.294 "copy": true, 00:12:53.294 "nvme_iov_md": false 00:12:53.294 }, 00:12:53.294 "memory_domains": [ 00:12:53.294 { 00:12:53.294 "dma_device_id": "system", 00:12:53.294 "dma_device_type": 1 00:12:53.294 }, 00:12:53.294 { 00:12:53.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.294 "dma_device_type": 2 00:12:53.294 } 00:12:53.294 ], 00:12:53.294 "driver_specific": {} 00:12:53.294 }' 00:12:53.294 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.553 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
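The block_size/md_size/md_interleave/dif_type checks repeated for each base bdev above reduce to one bdev_get_bdevs call plus a jq filter per field. A minimal sketch of that loop, under the same assumptions as the sketch earlier in this trace, would be:

# Illustrative condensation of the per-bdev property checks traced at bdev_raid.sh@205-@208.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$b")
    echo "$info" | jq '.[0].block_size'     # 512 for these malloc base bdevs
    echo "$info" | jq '.[0].md_size'        # null, as asserted in the trace
done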
00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.554 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.812 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.812 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.812 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:53.812 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.071 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.071 "name": "BaseBdev3", 00:12:54.071 "aliases": [ 00:12:54.071 "50299eb3-a4c6-408d-851d-9d41d3c42917" 00:12:54.071 ], 00:12:54.071 "product_name": "Malloc disk", 00:12:54.072 "block_size": 512, 00:12:54.072 "num_blocks": 65536, 00:12:54.072 "uuid": "50299eb3-a4c6-408d-851d-9d41d3c42917", 00:12:54.072 "assigned_rate_limits": { 00:12:54.072 "rw_ios_per_sec": 0, 00:12:54.072 "rw_mbytes_per_sec": 0, 00:12:54.072 "r_mbytes_per_sec": 0, 00:12:54.072 "w_mbytes_per_sec": 0 00:12:54.072 }, 00:12:54.072 "claimed": true, 00:12:54.072 "claim_type": "exclusive_write", 00:12:54.072 "zoned": false, 00:12:54.072 "supported_io_types": { 00:12:54.072 "read": true, 00:12:54.072 "write": true, 00:12:54.072 "unmap": true, 00:12:54.072 "flush": true, 00:12:54.072 "reset": true, 00:12:54.072 "nvme_admin": false, 00:12:54.072 "nvme_io": false, 00:12:54.072 "nvme_io_md": false, 00:12:54.072 "write_zeroes": true, 00:12:54.072 "zcopy": true, 00:12:54.072 "get_zone_info": false, 00:12:54.072 "zone_management": false, 00:12:54.072 "zone_append": false, 00:12:54.072 "compare": false, 00:12:54.072 "compare_and_write": false, 00:12:54.072 "abort": true, 00:12:54.072 "seek_hole": false, 00:12:54.072 "seek_data": false, 00:12:54.072 "copy": true, 00:12:54.072 "nvme_iov_md": false 00:12:54.072 }, 00:12:54.072 "memory_domains": [ 00:12:54.072 { 00:12:54.072 "dma_device_id": "system", 00:12:54.072 "dma_device_type": 1 00:12:54.072 }, 00:12:54.072 { 00:12:54.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.072 "dma_device_type": 2 00:12:54.072 } 00:12:54.072 ], 00:12:54.072 "driver_specific": {} 00:12:54.072 }' 00:12:54.072 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.072 11:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.072 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.330 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.330 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.330 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:54.590 [2024-07-25 11:54:40.468926] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.590 [2024-07-25 11:54:40.468957] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.590 [2024-07-25 11:54:40.468995] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.590 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.849 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.849 
"name": "Existed_Raid", 00:12:54.849 "uuid": "7f721f22-b7e0-4462-bc77-13943041008f", 00:12:54.849 "strip_size_kb": 64, 00:12:54.849 "state": "offline", 00:12:54.849 "raid_level": "raid0", 00:12:54.849 "superblock": false, 00:12:54.849 "num_base_bdevs": 3, 00:12:54.849 "num_base_bdevs_discovered": 2, 00:12:54.849 "num_base_bdevs_operational": 2, 00:12:54.849 "base_bdevs_list": [ 00:12:54.849 { 00:12:54.849 "name": null, 00:12:54.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.849 "is_configured": false, 00:12:54.849 "data_offset": 0, 00:12:54.849 "data_size": 65536 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "name": "BaseBdev2", 00:12:54.849 "uuid": "1164d419-9eeb-42fa-97b8-33c2dd696005", 00:12:54.849 "is_configured": true, 00:12:54.849 "data_offset": 0, 00:12:54.849 "data_size": 65536 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "name": "BaseBdev3", 00:12:54.850 "uuid": "50299eb3-a4c6-408d-851d-9d41d3c42917", 00:12:54.850 "is_configured": true, 00:12:54.850 "data_offset": 0, 00:12:54.850 "data_size": 65536 00:12:54.850 } 00:12:54.850 ] 00:12:54.850 }' 00:12:54.850 11:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.850 11:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.417 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:55.676 [2024-07-25 11:54:41.729311] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.676 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:55.676 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.676 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.676 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.935 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.935 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.935 11:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:56.194 [2024-07-25 11:54:42.204625] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.194 [2024-07-25 11:54:42.204664] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x111a700 name Existed_Raid, 
state offline 00:12:56.194 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:56.194 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:56.194 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.194 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:56.453 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.711 BaseBdev2 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.711 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.970 11:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.229 [ 00:12:57.229 { 00:12:57.229 "name": "BaseBdev2", 00:12:57.229 "aliases": [ 00:12:57.229 "daef5bb1-3c67-468e-bd5b-323329db00ff" 00:12:57.229 ], 00:12:57.229 "product_name": "Malloc disk", 00:12:57.229 "block_size": 512, 00:12:57.229 "num_blocks": 65536, 00:12:57.229 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:12:57.229 "assigned_rate_limits": { 00:12:57.229 "rw_ios_per_sec": 0, 00:12:57.229 "rw_mbytes_per_sec": 0, 00:12:57.229 "r_mbytes_per_sec": 0, 00:12:57.229 "w_mbytes_per_sec": 0 00:12:57.229 }, 00:12:57.229 "claimed": false, 00:12:57.229 "zoned": false, 00:12:57.229 "supported_io_types": { 00:12:57.229 "read": true, 00:12:57.229 "write": true, 00:12:57.229 "unmap": true, 00:12:57.229 "flush": true, 00:12:57.229 "reset": true, 00:12:57.229 "nvme_admin": false, 00:12:57.229 "nvme_io": false, 00:12:57.229 "nvme_io_md": false, 00:12:57.229 "write_zeroes": true, 00:12:57.229 "zcopy": true, 00:12:57.229 "get_zone_info": false, 00:12:57.229 "zone_management": false, 00:12:57.229 "zone_append": false, 00:12:57.229 "compare": false, 00:12:57.229 "compare_and_write": false, 00:12:57.229 "abort": true, 00:12:57.229 "seek_hole": 
false, 00:12:57.229 "seek_data": false, 00:12:57.229 "copy": true, 00:12:57.229 "nvme_iov_md": false 00:12:57.229 }, 00:12:57.229 "memory_domains": [ 00:12:57.229 { 00:12:57.229 "dma_device_id": "system", 00:12:57.229 "dma_device_type": 1 00:12:57.229 }, 00:12:57.229 { 00:12:57.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.229 "dma_device_type": 2 00:12:57.229 } 00:12:57.229 ], 00:12:57.229 "driver_specific": {} 00:12:57.229 } 00:12:57.229 ] 00:12:57.229 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.229 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:57.229 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:57.229 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:57.229 BaseBdev3 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.486 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.744 [ 00:12:57.744 { 00:12:57.744 "name": "BaseBdev3", 00:12:57.744 "aliases": [ 00:12:57.744 "18ba8536-6f00-4eea-985a-d2831e76dc85" 00:12:57.744 ], 00:12:57.744 "product_name": "Malloc disk", 00:12:57.744 "block_size": 512, 00:12:57.744 "num_blocks": 65536, 00:12:57.744 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:12:57.744 "assigned_rate_limits": { 00:12:57.744 "rw_ios_per_sec": 0, 00:12:57.744 "rw_mbytes_per_sec": 0, 00:12:57.744 "r_mbytes_per_sec": 0, 00:12:57.744 "w_mbytes_per_sec": 0 00:12:57.744 }, 00:12:57.744 "claimed": false, 00:12:57.744 "zoned": false, 00:12:57.744 "supported_io_types": { 00:12:57.744 "read": true, 00:12:57.744 "write": true, 00:12:57.744 "unmap": true, 00:12:57.744 "flush": true, 00:12:57.744 "reset": true, 00:12:57.744 "nvme_admin": false, 00:12:57.744 "nvme_io": false, 00:12:57.744 "nvme_io_md": false, 00:12:57.744 "write_zeroes": true, 00:12:57.744 "zcopy": true, 00:12:57.744 "get_zone_info": false, 00:12:57.744 "zone_management": false, 00:12:57.744 "zone_append": false, 00:12:57.744 "compare": false, 00:12:57.744 "compare_and_write": false, 00:12:57.744 "abort": true, 00:12:57.744 "seek_hole": false, 00:12:57.744 "seek_data": false, 00:12:57.744 "copy": true, 00:12:57.744 "nvme_iov_md": false 00:12:57.744 }, 00:12:57.744 "memory_domains": [ 00:12:57.744 { 00:12:57.744 "dma_device_id": "system", 00:12:57.744 "dma_device_type": 1 00:12:57.744 }, 00:12:57.744 { 00:12:57.744 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.744 "dma_device_type": 2 00:12:57.744 } 00:12:57.744 ], 00:12:57.744 "driver_specific": {} 00:12:57.744 } 00:12:57.744 ] 00:12:57.744 11:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.744 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:57.744 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:57.744 11:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:58.003 [2024-07-25 11:54:44.035905] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.003 [2024-07-25 11:54:44.035944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.003 [2024-07-25 11:54:44.035961] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.003 [2024-07-25 11:54:44.037209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.003 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.261 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.261 "name": "Existed_Raid", 00:12:58.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.261 "strip_size_kb": 64, 00:12:58.261 "state": "configuring", 00:12:58.261 "raid_level": "raid0", 00:12:58.261 "superblock": false, 00:12:58.261 "num_base_bdevs": 3, 00:12:58.261 "num_base_bdevs_discovered": 2, 00:12:58.261 "num_base_bdevs_operational": 3, 00:12:58.261 "base_bdevs_list": [ 00:12:58.261 { 00:12:58.261 "name": "BaseBdev1", 00:12:58.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.261 "is_configured": false, 00:12:58.261 "data_offset": 0, 00:12:58.261 "data_size": 0 00:12:58.261 }, 00:12:58.261 { 00:12:58.261 "name": "BaseBdev2", 00:12:58.261 "uuid": 
"daef5bb1-3c67-468e-bd5b-323329db00ff", 00:12:58.261 "is_configured": true, 00:12:58.261 "data_offset": 0, 00:12:58.261 "data_size": 65536 00:12:58.261 }, 00:12:58.261 { 00:12:58.261 "name": "BaseBdev3", 00:12:58.261 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:12:58.261 "is_configured": true, 00:12:58.261 "data_offset": 0, 00:12:58.261 "data_size": 65536 00:12:58.261 } 00:12:58.261 ] 00:12:58.261 }' 00:12:58.261 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.261 11:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.829 11:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:59.087 [2024-07-25 11:54:45.058590] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.087 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.345 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.345 "name": "Existed_Raid", 00:12:59.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.345 "strip_size_kb": 64, 00:12:59.345 "state": "configuring", 00:12:59.345 "raid_level": "raid0", 00:12:59.345 "superblock": false, 00:12:59.345 "num_base_bdevs": 3, 00:12:59.345 "num_base_bdevs_discovered": 1, 00:12:59.345 "num_base_bdevs_operational": 3, 00:12:59.345 "base_bdevs_list": [ 00:12:59.345 { 00:12:59.345 "name": "BaseBdev1", 00:12:59.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.345 "is_configured": false, 00:12:59.345 "data_offset": 0, 00:12:59.345 "data_size": 0 00:12:59.345 }, 00:12:59.345 { 00:12:59.345 "name": null, 00:12:59.345 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:12:59.345 "is_configured": false, 00:12:59.345 "data_offset": 0, 00:12:59.345 "data_size": 65536 00:12:59.345 }, 00:12:59.345 { 00:12:59.345 "name": "BaseBdev3", 00:12:59.346 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:12:59.346 "is_configured": true, 00:12:59.346 "data_offset": 0, 
00:12:59.346 "data_size": 65536 00:12:59.346 } 00:12:59.346 ] 00:12:59.346 }' 00:12:59.346 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.346 11:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.913 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.913 11:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.172 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:00.172 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.430 [2024-07-25 11:54:46.341067] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.430 BaseBdev1 00:13:00.430 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:00.430 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:00.430 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:00.430 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:00.430 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:00.431 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:00.431 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:00.689 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.689 [ 00:13:00.689 { 00:13:00.689 "name": "BaseBdev1", 00:13:00.689 "aliases": [ 00:13:00.689 "87baff5f-379a-4e0d-9bc3-2abe92318f81" 00:13:00.689 ], 00:13:00.689 "product_name": "Malloc disk", 00:13:00.689 "block_size": 512, 00:13:00.689 "num_blocks": 65536, 00:13:00.689 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:00.689 "assigned_rate_limits": { 00:13:00.689 "rw_ios_per_sec": 0, 00:13:00.689 "rw_mbytes_per_sec": 0, 00:13:00.689 "r_mbytes_per_sec": 0, 00:13:00.689 "w_mbytes_per_sec": 0 00:13:00.689 }, 00:13:00.689 "claimed": true, 00:13:00.689 "claim_type": "exclusive_write", 00:13:00.689 "zoned": false, 00:13:00.689 "supported_io_types": { 00:13:00.689 "read": true, 00:13:00.689 "write": true, 00:13:00.689 "unmap": true, 00:13:00.689 "flush": true, 00:13:00.689 "reset": true, 00:13:00.690 "nvme_admin": false, 00:13:00.690 "nvme_io": false, 00:13:00.690 "nvme_io_md": false, 00:13:00.690 "write_zeroes": true, 00:13:00.690 "zcopy": true, 00:13:00.690 "get_zone_info": false, 00:13:00.690 "zone_management": false, 00:13:00.690 "zone_append": false, 00:13:00.690 "compare": false, 00:13:00.690 "compare_and_write": false, 00:13:00.690 "abort": true, 00:13:00.690 "seek_hole": false, 00:13:00.690 "seek_data": false, 00:13:00.690 "copy": true, 00:13:00.690 "nvme_iov_md": false 00:13:00.690 }, 00:13:00.690 "memory_domains": [ 00:13:00.690 
{ 00:13:00.690 "dma_device_id": "system", 00:13:00.690 "dma_device_type": 1 00:13:00.690 }, 00:13:00.690 { 00:13:00.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.690 "dma_device_type": 2 00:13:00.690 } 00:13:00.690 ], 00:13:00.690 "driver_specific": {} 00:13:00.690 } 00:13:00.690 ] 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.690 11:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.948 11:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.948 "name": "Existed_Raid", 00:13:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.948 "strip_size_kb": 64, 00:13:00.948 "state": "configuring", 00:13:00.948 "raid_level": "raid0", 00:13:00.948 "superblock": false, 00:13:00.948 "num_base_bdevs": 3, 00:13:00.948 "num_base_bdevs_discovered": 2, 00:13:00.948 "num_base_bdevs_operational": 3, 00:13:00.948 "base_bdevs_list": [ 00:13:00.948 { 00:13:00.948 "name": "BaseBdev1", 00:13:00.948 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:00.948 "is_configured": true, 00:13:00.948 "data_offset": 0, 00:13:00.948 "data_size": 65536 00:13:00.948 }, 00:13:00.948 { 00:13:00.948 "name": null, 00:13:00.948 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:00.948 "is_configured": false, 00:13:00.948 "data_offset": 0, 00:13:00.948 "data_size": 65536 00:13:00.948 }, 00:13:00.948 { 00:13:00.948 "name": "BaseBdev3", 00:13:00.948 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:00.948 "is_configured": true, 00:13:00.948 "data_offset": 0, 00:13:00.948 "data_size": 65536 00:13:00.948 } 00:13:00.948 ] 00:13:00.948 }' 00:13:00.948 11:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.948 11:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 11:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.515 11:54:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.802 11:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:01.802 11:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:02.060 [2024-07-25 11:54:48.037662] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.060 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.318 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.318 "name": "Existed_Raid", 00:13:02.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.318 "strip_size_kb": 64, 00:13:02.318 "state": "configuring", 00:13:02.318 "raid_level": "raid0", 00:13:02.318 "superblock": false, 00:13:02.318 "num_base_bdevs": 3, 00:13:02.318 "num_base_bdevs_discovered": 1, 00:13:02.318 "num_base_bdevs_operational": 3, 00:13:02.318 "base_bdevs_list": [ 00:13:02.318 { 00:13:02.318 "name": "BaseBdev1", 00:13:02.318 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:02.318 "is_configured": true, 00:13:02.318 "data_offset": 0, 00:13:02.318 "data_size": 65536 00:13:02.318 }, 00:13:02.318 { 00:13:02.318 "name": null, 00:13:02.318 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:02.318 "is_configured": false, 00:13:02.318 "data_offset": 0, 00:13:02.318 "data_size": 65536 00:13:02.318 }, 00:13:02.318 { 00:13:02.318 "name": null, 00:13:02.318 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:02.318 "is_configured": false, 00:13:02.318 "data_offset": 0, 00:13:02.318 "data_size": 65536 00:13:02.318 } 00:13:02.318 ] 00:13:02.318 }' 00:13:02.318 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.318 11:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.884 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.884 11:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:03.142 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:03.142 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:03.399 [2024-07-25 11:54:49.288976] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.399 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.400 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.400 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.400 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.656 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.656 "name": "Existed_Raid", 00:13:03.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.656 "strip_size_kb": 64, 00:13:03.656 "state": "configuring", 00:13:03.656 "raid_level": "raid0", 00:13:03.656 "superblock": false, 00:13:03.656 "num_base_bdevs": 3, 00:13:03.656 "num_base_bdevs_discovered": 2, 00:13:03.656 "num_base_bdevs_operational": 3, 00:13:03.656 "base_bdevs_list": [ 00:13:03.656 { 00:13:03.656 "name": "BaseBdev1", 00:13:03.656 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:03.656 "is_configured": true, 00:13:03.656 "data_offset": 0, 00:13:03.656 "data_size": 65536 00:13:03.656 }, 00:13:03.656 { 00:13:03.657 "name": null, 00:13:03.657 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:03.657 "is_configured": false, 00:13:03.657 "data_offset": 0, 00:13:03.657 "data_size": 65536 00:13:03.657 }, 00:13:03.657 { 00:13:03.657 "name": "BaseBdev3", 00:13:03.657 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:03.657 "is_configured": true, 00:13:03.657 "data_offset": 0, 00:13:03.657 "data_size": 65536 00:13:03.657 } 00:13:03.657 ] 00:13:03.657 }' 00:13:03.657 11:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.657 11:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.222 
11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.222 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:04.222 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:04.222 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:04.479 [2024-07-25 11:54:50.540329] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.479 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.736 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.736 "name": "Existed_Raid", 00:13:04.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.737 "strip_size_kb": 64, 00:13:04.737 "state": "configuring", 00:13:04.737 "raid_level": "raid0", 00:13:04.737 "superblock": false, 00:13:04.737 "num_base_bdevs": 3, 00:13:04.737 "num_base_bdevs_discovered": 1, 00:13:04.737 "num_base_bdevs_operational": 3, 00:13:04.737 "base_bdevs_list": [ 00:13:04.737 { 00:13:04.737 "name": null, 00:13:04.737 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:04.737 "is_configured": false, 00:13:04.737 "data_offset": 0, 00:13:04.737 "data_size": 65536 00:13:04.737 }, 00:13:04.737 { 00:13:04.737 "name": null, 00:13:04.737 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:04.737 "is_configured": false, 00:13:04.737 "data_offset": 0, 00:13:04.737 "data_size": 65536 00:13:04.737 }, 00:13:04.737 { 00:13:04.737 "name": "BaseBdev3", 00:13:04.737 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:04.737 "is_configured": true, 00:13:04.737 "data_offset": 0, 00:13:04.737 "data_size": 65536 00:13:04.737 } 00:13:04.737 ] 00:13:04.737 }' 00:13:04.737 11:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.737 11:54:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 11:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.301 11:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.559 11:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:05.559 11:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.126 [2024-07-25 11:54:52.074475] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.126 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.385 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.385 "name": "Existed_Raid", 00:13:06.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.385 "strip_size_kb": 64, 00:13:06.385 "state": "configuring", 00:13:06.385 "raid_level": "raid0", 00:13:06.385 "superblock": false, 00:13:06.385 "num_base_bdevs": 3, 00:13:06.385 "num_base_bdevs_discovered": 2, 00:13:06.385 "num_base_bdevs_operational": 3, 00:13:06.385 "base_bdevs_list": [ 00:13:06.385 { 00:13:06.385 "name": null, 00:13:06.385 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:06.385 "is_configured": false, 00:13:06.385 "data_offset": 0, 00:13:06.385 "data_size": 65536 00:13:06.385 }, 00:13:06.385 { 00:13:06.385 "name": "BaseBdev2", 00:13:06.385 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:06.385 "is_configured": true, 00:13:06.385 "data_offset": 0, 00:13:06.385 "data_size": 65536 00:13:06.385 }, 00:13:06.385 { 00:13:06.385 "name": "BaseBdev3", 00:13:06.385 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:06.385 "is_configured": true, 00:13:06.385 "data_offset": 0, 00:13:06.385 "data_size": 65536 00:13:06.385 } 00:13:06.385 ] 
00:13:06.385 }' 00:13:06.385 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.385 11:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.950 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.950 11:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.209 11:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:07.209 11:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.209 11:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.465 11:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 87baff5f-379a-4e0d-9bc3-2abe92318f81 00:13:07.722 [2024-07-25 11:54:53.589597] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.722 [2024-07-25 11:54:53.589631] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x111ba60 00:13:07.722 [2024-07-25 11:54:53.589639] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:07.722 [2024-07-25 11:54:53.589812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12c21a0 00:13:07.722 [2024-07-25 11:54:53.589916] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x111ba60 00:13:07.722 [2024-07-25 11:54:53.589925] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x111ba60 00:13:07.722 [2024-07-25 11:54:53.590073] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.722 NewBaseBdev 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.722 11:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.981 [ 00:13:07.981 { 00:13:07.981 "name": "NewBaseBdev", 00:13:07.981 "aliases": [ 00:13:07.981 "87baff5f-379a-4e0d-9bc3-2abe92318f81" 00:13:07.981 ], 00:13:07.981 "product_name": "Malloc disk", 00:13:07.981 "block_size": 512, 00:13:07.981 "num_blocks": 65536, 00:13:07.981 "uuid": 
"87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:07.981 "assigned_rate_limits": { 00:13:07.981 "rw_ios_per_sec": 0, 00:13:07.981 "rw_mbytes_per_sec": 0, 00:13:07.981 "r_mbytes_per_sec": 0, 00:13:07.981 "w_mbytes_per_sec": 0 00:13:07.981 }, 00:13:07.981 "claimed": true, 00:13:07.981 "claim_type": "exclusive_write", 00:13:07.981 "zoned": false, 00:13:07.981 "supported_io_types": { 00:13:07.981 "read": true, 00:13:07.981 "write": true, 00:13:07.981 "unmap": true, 00:13:07.981 "flush": true, 00:13:07.981 "reset": true, 00:13:07.981 "nvme_admin": false, 00:13:07.981 "nvme_io": false, 00:13:07.981 "nvme_io_md": false, 00:13:07.981 "write_zeroes": true, 00:13:07.981 "zcopy": true, 00:13:07.981 "get_zone_info": false, 00:13:07.981 "zone_management": false, 00:13:07.981 "zone_append": false, 00:13:07.981 "compare": false, 00:13:07.981 "compare_and_write": false, 00:13:07.981 "abort": true, 00:13:07.981 "seek_hole": false, 00:13:07.981 "seek_data": false, 00:13:07.981 "copy": true, 00:13:07.981 "nvme_iov_md": false 00:13:07.981 }, 00:13:07.981 "memory_domains": [ 00:13:07.981 { 00:13:07.981 "dma_device_id": "system", 00:13:07.981 "dma_device_type": 1 00:13:07.981 }, 00:13:07.981 { 00:13:07.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.981 "dma_device_type": 2 00:13:07.981 } 00:13:07.981 ], 00:13:07.981 "driver_specific": {} 00:13:07.981 } 00:13:07.981 ] 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.981 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.240 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.240 "name": "Existed_Raid", 00:13:08.240 "uuid": "0ab4ace4-298c-49d9-ab3c-0c225d3d4c5c", 00:13:08.240 "strip_size_kb": 64, 00:13:08.240 "state": "online", 00:13:08.240 "raid_level": "raid0", 00:13:08.240 "superblock": false, 00:13:08.240 "num_base_bdevs": 3, 00:13:08.240 "num_base_bdevs_discovered": 3, 00:13:08.240 "num_base_bdevs_operational": 3, 00:13:08.240 "base_bdevs_list": [ 00:13:08.240 { 00:13:08.240 "name": "NewBaseBdev", 00:13:08.240 "uuid": 
"87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:08.240 "is_configured": true, 00:13:08.240 "data_offset": 0, 00:13:08.240 "data_size": 65536 00:13:08.240 }, 00:13:08.240 { 00:13:08.240 "name": "BaseBdev2", 00:13:08.240 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:08.240 "is_configured": true, 00:13:08.240 "data_offset": 0, 00:13:08.240 "data_size": 65536 00:13:08.240 }, 00:13:08.240 { 00:13:08.240 "name": "BaseBdev3", 00:13:08.240 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:08.240 "is_configured": true, 00:13:08.240 "data_offset": 0, 00:13:08.240 "data_size": 65536 00:13:08.240 } 00:13:08.240 ] 00:13:08.240 }' 00:13:08.240 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.240 11:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:08.807 11:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:09.066 [2024-07-25 11:54:55.025648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:09.066 "name": "Existed_Raid", 00:13:09.066 "aliases": [ 00:13:09.066 "0ab4ace4-298c-49d9-ab3c-0c225d3d4c5c" 00:13:09.066 ], 00:13:09.066 "product_name": "Raid Volume", 00:13:09.066 "block_size": 512, 00:13:09.066 "num_blocks": 196608, 00:13:09.066 "uuid": "0ab4ace4-298c-49d9-ab3c-0c225d3d4c5c", 00:13:09.066 "assigned_rate_limits": { 00:13:09.066 "rw_ios_per_sec": 0, 00:13:09.066 "rw_mbytes_per_sec": 0, 00:13:09.066 "r_mbytes_per_sec": 0, 00:13:09.066 "w_mbytes_per_sec": 0 00:13:09.066 }, 00:13:09.066 "claimed": false, 00:13:09.066 "zoned": false, 00:13:09.066 "supported_io_types": { 00:13:09.066 "read": true, 00:13:09.066 "write": true, 00:13:09.066 "unmap": true, 00:13:09.066 "flush": true, 00:13:09.066 "reset": true, 00:13:09.066 "nvme_admin": false, 00:13:09.066 "nvme_io": false, 00:13:09.066 "nvme_io_md": false, 00:13:09.066 "write_zeroes": true, 00:13:09.066 "zcopy": false, 00:13:09.066 "get_zone_info": false, 00:13:09.066 "zone_management": false, 00:13:09.066 "zone_append": false, 00:13:09.066 "compare": false, 00:13:09.066 "compare_and_write": false, 00:13:09.066 "abort": false, 00:13:09.066 "seek_hole": false, 00:13:09.066 "seek_data": false, 00:13:09.066 "copy": false, 00:13:09.066 "nvme_iov_md": false 00:13:09.066 }, 00:13:09.066 "memory_domains": [ 00:13:09.066 { 00:13:09.066 "dma_device_id": "system", 00:13:09.066 "dma_device_type": 1 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.066 "dma_device_type": 2 00:13:09.066 }, 
00:13:09.066 { 00:13:09.066 "dma_device_id": "system", 00:13:09.066 "dma_device_type": 1 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.066 "dma_device_type": 2 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "dma_device_id": "system", 00:13:09.066 "dma_device_type": 1 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.066 "dma_device_type": 2 00:13:09.066 } 00:13:09.066 ], 00:13:09.066 "driver_specific": { 00:13:09.066 "raid": { 00:13:09.066 "uuid": "0ab4ace4-298c-49d9-ab3c-0c225d3d4c5c", 00:13:09.066 "strip_size_kb": 64, 00:13:09.066 "state": "online", 00:13:09.066 "raid_level": "raid0", 00:13:09.066 "superblock": false, 00:13:09.066 "num_base_bdevs": 3, 00:13:09.066 "num_base_bdevs_discovered": 3, 00:13:09.066 "num_base_bdevs_operational": 3, 00:13:09.066 "base_bdevs_list": [ 00:13:09.066 { 00:13:09.066 "name": "NewBaseBdev", 00:13:09.066 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:09.066 "is_configured": true, 00:13:09.066 "data_offset": 0, 00:13:09.066 "data_size": 65536 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "name": "BaseBdev2", 00:13:09.066 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:09.066 "is_configured": true, 00:13:09.066 "data_offset": 0, 00:13:09.066 "data_size": 65536 00:13:09.066 }, 00:13:09.066 { 00:13:09.066 "name": "BaseBdev3", 00:13:09.066 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:09.066 "is_configured": true, 00:13:09.066 "data_offset": 0, 00:13:09.066 "data_size": 65536 00:13:09.066 } 00:13:09.066 ] 00:13:09.066 } 00:13:09.066 } 00:13:09.066 }' 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:09.066 BaseBdev2 00:13:09.066 BaseBdev3' 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:09.066 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.325 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:09.325 "name": "NewBaseBdev", 00:13:09.325 "aliases": [ 00:13:09.325 "87baff5f-379a-4e0d-9bc3-2abe92318f81" 00:13:09.325 ], 00:13:09.325 "product_name": "Malloc disk", 00:13:09.325 "block_size": 512, 00:13:09.325 "num_blocks": 65536, 00:13:09.325 "uuid": "87baff5f-379a-4e0d-9bc3-2abe92318f81", 00:13:09.325 "assigned_rate_limits": { 00:13:09.325 "rw_ios_per_sec": 0, 00:13:09.325 "rw_mbytes_per_sec": 0, 00:13:09.325 "r_mbytes_per_sec": 0, 00:13:09.325 "w_mbytes_per_sec": 0 00:13:09.325 }, 00:13:09.325 "claimed": true, 00:13:09.325 "claim_type": "exclusive_write", 00:13:09.325 "zoned": false, 00:13:09.325 "supported_io_types": { 00:13:09.325 "read": true, 00:13:09.325 "write": true, 00:13:09.325 "unmap": true, 00:13:09.325 "flush": true, 00:13:09.325 "reset": true, 00:13:09.325 "nvme_admin": false, 00:13:09.325 "nvme_io": false, 00:13:09.325 "nvme_io_md": false, 00:13:09.325 "write_zeroes": true, 00:13:09.325 "zcopy": true, 00:13:09.325 "get_zone_info": false, 00:13:09.325 "zone_management": false, 00:13:09.325 "zone_append": false, 00:13:09.325 "compare": 
false, 00:13:09.325 "compare_and_write": false, 00:13:09.325 "abort": true, 00:13:09.325 "seek_hole": false, 00:13:09.325 "seek_data": false, 00:13:09.325 "copy": true, 00:13:09.325 "nvme_iov_md": false 00:13:09.325 }, 00:13:09.325 "memory_domains": [ 00:13:09.325 { 00:13:09.325 "dma_device_id": "system", 00:13:09.325 "dma_device_type": 1 00:13:09.325 }, 00:13:09.325 { 00:13:09.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.325 "dma_device_type": 2 00:13:09.325 } 00:13:09.325 ], 00:13:09.325 "driver_specific": {} 00:13:09.325 }' 00:13:09.325 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.325 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.325 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:09.325 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.583 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:09.842 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:09.842 "name": "BaseBdev2", 00:13:09.842 "aliases": [ 00:13:09.842 "daef5bb1-3c67-468e-bd5b-323329db00ff" 00:13:09.842 ], 00:13:09.842 "product_name": "Malloc disk", 00:13:09.842 "block_size": 512, 00:13:09.842 "num_blocks": 65536, 00:13:09.842 "uuid": "daef5bb1-3c67-468e-bd5b-323329db00ff", 00:13:09.842 "assigned_rate_limits": { 00:13:09.842 "rw_ios_per_sec": 0, 00:13:09.842 "rw_mbytes_per_sec": 0, 00:13:09.842 "r_mbytes_per_sec": 0, 00:13:09.842 "w_mbytes_per_sec": 0 00:13:09.842 }, 00:13:09.842 "claimed": true, 00:13:09.842 "claim_type": "exclusive_write", 00:13:09.842 "zoned": false, 00:13:09.842 "supported_io_types": { 00:13:09.842 "read": true, 00:13:09.842 "write": true, 00:13:09.842 "unmap": true, 00:13:09.842 "flush": true, 00:13:09.842 "reset": true, 00:13:09.842 "nvme_admin": false, 00:13:09.842 "nvme_io": false, 00:13:09.842 "nvme_io_md": false, 00:13:09.842 "write_zeroes": true, 00:13:09.842 "zcopy": true, 00:13:09.842 "get_zone_info": false, 00:13:09.842 "zone_management": false, 00:13:09.842 "zone_append": false, 00:13:09.842 "compare": false, 00:13:09.842 "compare_and_write": false, 00:13:09.842 "abort": true, 00:13:09.842 "seek_hole": false, 00:13:09.842 "seek_data": false, 00:13:09.842 "copy": true, 00:13:09.842 
"nvme_iov_md": false 00:13:09.842 }, 00:13:09.842 "memory_domains": [ 00:13:09.842 { 00:13:09.842 "dma_device_id": "system", 00:13:09.842 "dma_device_type": 1 00:13:09.842 }, 00:13:09.842 { 00:13:09.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.842 "dma_device_type": 2 00:13:09.842 } 00:13:09.842 ], 00:13:09.842 "driver_specific": {} 00:13:09.842 }' 00:13:09.842 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.842 11:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.100 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.358 "name": "BaseBdev3", 00:13:10.358 "aliases": [ 00:13:10.358 "18ba8536-6f00-4eea-985a-d2831e76dc85" 00:13:10.358 ], 00:13:10.358 "product_name": "Malloc disk", 00:13:10.358 "block_size": 512, 00:13:10.358 "num_blocks": 65536, 00:13:10.358 "uuid": "18ba8536-6f00-4eea-985a-d2831e76dc85", 00:13:10.358 "assigned_rate_limits": { 00:13:10.358 "rw_ios_per_sec": 0, 00:13:10.358 "rw_mbytes_per_sec": 0, 00:13:10.358 "r_mbytes_per_sec": 0, 00:13:10.358 "w_mbytes_per_sec": 0 00:13:10.358 }, 00:13:10.358 "claimed": true, 00:13:10.358 "claim_type": "exclusive_write", 00:13:10.358 "zoned": false, 00:13:10.358 "supported_io_types": { 00:13:10.358 "read": true, 00:13:10.358 "write": true, 00:13:10.358 "unmap": true, 00:13:10.358 "flush": true, 00:13:10.358 "reset": true, 00:13:10.358 "nvme_admin": false, 00:13:10.358 "nvme_io": false, 00:13:10.358 "nvme_io_md": false, 00:13:10.358 "write_zeroes": true, 00:13:10.358 "zcopy": true, 00:13:10.358 "get_zone_info": false, 00:13:10.358 "zone_management": false, 00:13:10.358 "zone_append": false, 00:13:10.358 "compare": false, 00:13:10.358 "compare_and_write": false, 00:13:10.358 "abort": true, 00:13:10.358 "seek_hole": false, 00:13:10.358 "seek_data": false, 00:13:10.358 "copy": true, 00:13:10.358 "nvme_iov_md": false 00:13:10.358 }, 00:13:10.358 "memory_domains": [ 00:13:10.358 { 00:13:10.358 "dma_device_id": "system", 00:13:10.358 "dma_device_type": 1 00:13:10.358 }, 
00:13:10.358 { 00:13:10.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.358 "dma_device_type": 2 00:13:10.358 } 00:13:10.358 ], 00:13:10.358 "driver_specific": {} 00:13:10.358 }' 00:13:10.358 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.617 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.875 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.875 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.875 11:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:11.134 [2024-07-25 11:54:56.994600] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.134 [2024-07-25 11:54:56.994625] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.134 [2024-07-25 11:54:56.994680] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.134 [2024-07-25 11:54:56.994727] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.134 [2024-07-25 11:54:56.994738] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x111ba60 name Existed_Raid, state offline 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4117962 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4117962 ']' 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4117962 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4117962 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4117962' 00:13:11.134 killing process with pid 4117962 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4117962 00:13:11.134 [2024-07-25 11:54:57.069156] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.134 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4117962 00:13:11.134 [2024-07-25 11:54:57.092040] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:11.393 00:13:11.393 real 0m26.984s 00:13:11.393 user 0m49.536s 00:13:11.393 sys 0m4.867s 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.393 ************************************ 00:13:11.393 END TEST raid_state_function_test 00:13:11.393 ************************************ 00:13:11.393 11:54:57 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:13:11.393 11:54:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:11.393 11:54:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.393 11:54:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.393 ************************************ 00:13:11.393 START TEST raid_state_function_test_sb 00:13:11.393 ************************************ 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:11.393 11:54:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4123068 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4123068' 00:13:11.393 Process raid pid: 4123068 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4123068 /var/tmp/spdk-raid.sock 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4123068 ']' 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.393 11:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.393 [2024-07-25 11:54:57.435956] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:13:11.393 [2024-07-25 11:54:57.436012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:11.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.393 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:11.652 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.652 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:11.652 [2024-07-25 11:54:57.569615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.652 [2024-07-25 11:54:57.657548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.652 [2024-07-25 11:54:57.723271] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.652 [2024-07-25 11:54:57.723303] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.218 11:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.218 11:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:12.218 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:12.477 [2024-07-25 11:54:58.538323] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.477 [2024-07-25 11:54:58.538361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.477 [2024-07-25 11:54:58.538372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.477 [2024-07-25 11:54:58.538383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.477 [2024-07-25 11:54:58.538390] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.477 [2024-07-25 11:54:58.538400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.477 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.736 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.736 "name": "Existed_Raid", 00:13:12.736 "uuid": "ed579a44-82af-43da-b33d-3c5748716ef2", 00:13:12.736 "strip_size_kb": 64, 00:13:12.736 "state": "configuring", 00:13:12.736 "raid_level": "raid0", 00:13:12.736 "superblock": true, 00:13:12.736 "num_base_bdevs": 3, 00:13:12.736 "num_base_bdevs_discovered": 0, 00:13:12.736 "num_base_bdevs_operational": 3, 00:13:12.736 "base_bdevs_list": [ 00:13:12.736 { 00:13:12.736 "name": "BaseBdev1", 00:13:12.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.736 "is_configured": false, 00:13:12.736 "data_offset": 0, 00:13:12.736 "data_size": 0 00:13:12.736 }, 00:13:12.736 { 00:13:12.736 "name": "BaseBdev2", 00:13:12.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.736 "is_configured": false, 00:13:12.736 "data_offset": 0, 00:13:12.736 "data_size": 0 00:13:12.736 }, 00:13:12.736 { 00:13:12.736 "name": "BaseBdev3", 00:13:12.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.736 "is_configured": false, 00:13:12.736 "data_offset": 0, 00:13:12.736 "data_size": 0 00:13:12.736 } 00:13:12.736 ] 00:13:12.736 }' 00:13:12.736 11:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.736 11:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.304 11:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.563 [2024-07-25 11:54:59.528780] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.563 [2024-07-25 11:54:59.528811] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x174bf40 name Existed_Raid, state configuring 00:13:13.563 11:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:13.821 [2024-07-25 11:54:59.757399] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.821 [2024-07-25 11:54:59.757430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.821 [2024-07-25 11:54:59.757439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.821 [2024-07-25 11:54:59.757450] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.821 [2024-07-25 11:54:59.757457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.821 [2024-07-25 11:54:59.757467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.821 11:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.080 [2024-07-25 11:54:59.995515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.080 BaseBdev1 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:14.080 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:14.338 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.597 [ 00:13:14.597 { 00:13:14.597 "name": "BaseBdev1", 00:13:14.597 "aliases": [ 00:13:14.597 "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef" 00:13:14.597 ], 00:13:14.597 "product_name": "Malloc disk", 00:13:14.597 "block_size": 512, 00:13:14.597 "num_blocks": 65536, 00:13:14.597 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:14.597 "assigned_rate_limits": { 00:13:14.597 "rw_ios_per_sec": 0, 00:13:14.597 "rw_mbytes_per_sec": 0, 00:13:14.597 "r_mbytes_per_sec": 0, 00:13:14.597 "w_mbytes_per_sec": 0 00:13:14.597 }, 00:13:14.597 "claimed": true, 00:13:14.597 "claim_type": "exclusive_write", 00:13:14.597 "zoned": false, 00:13:14.597 "supported_io_types": { 00:13:14.597 "read": true, 00:13:14.597 "write": true, 00:13:14.597 "unmap": true, 00:13:14.597 "flush": true, 00:13:14.597 "reset": true, 00:13:14.597 "nvme_admin": false, 00:13:14.597 "nvme_io": false, 00:13:14.597 "nvme_io_md": false, 00:13:14.597 "write_zeroes": true, 00:13:14.597 "zcopy": true, 00:13:14.597 "get_zone_info": false, 00:13:14.597 "zone_management": false, 00:13:14.597 "zone_append": false, 00:13:14.597 "compare": false, 00:13:14.597 "compare_and_write": false, 00:13:14.597 "abort": true, 00:13:14.597 "seek_hole": false, 00:13:14.597 "seek_data": false, 00:13:14.597 "copy": true, 00:13:14.597 "nvme_iov_md": false 00:13:14.597 }, 00:13:14.597 "memory_domains": [ 00:13:14.597 { 00:13:14.597 "dma_device_id": "system", 00:13:14.597 "dma_device_type": 1 00:13:14.597 }, 00:13:14.597 { 00:13:14.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.597 "dma_device_type": 2 00:13:14.597 } 00:13:14.597 ], 00:13:14.597 "driver_specific": {} 00:13:14.597 } 00:13:14.597 ] 00:13:14.597 11:55:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.597 "name": "Existed_Raid", 00:13:14.597 "uuid": "75fd3c31-6106-4eee-a426-ab5b254078dd", 00:13:14.597 "strip_size_kb": 64, 00:13:14.597 "state": "configuring", 00:13:14.597 "raid_level": "raid0", 00:13:14.597 "superblock": true, 00:13:14.597 "num_base_bdevs": 3, 00:13:14.597 "num_base_bdevs_discovered": 1, 00:13:14.597 "num_base_bdevs_operational": 3, 00:13:14.597 "base_bdevs_list": [ 00:13:14.597 { 00:13:14.597 "name": "BaseBdev1", 00:13:14.597 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:14.597 "is_configured": true, 00:13:14.597 "data_offset": 2048, 00:13:14.597 "data_size": 63488 00:13:14.597 }, 00:13:14.597 { 00:13:14.597 "name": "BaseBdev2", 00:13:14.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.597 "is_configured": false, 00:13:14.597 "data_offset": 0, 00:13:14.597 "data_size": 0 00:13:14.597 }, 00:13:14.597 { 00:13:14.597 "name": "BaseBdev3", 00:13:14.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.597 "is_configured": false, 00:13:14.597 "data_offset": 0, 00:13:14.597 "data_size": 0 00:13:14.597 } 00:13:14.597 ] 00:13:14.597 }' 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.597 11:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.164 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:15.423 [2024-07-25 11:55:01.455364] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.423 [2024-07-25 11:55:01.455400] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x174b810 name Existed_Raid, state configuring 00:13:15.423 11:55:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:15.681 [2024-07-25 11:55:01.684005] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.681 [2024-07-25 11:55:01.685394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.681 [2024-07-25 11:55:01.685425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.681 [2024-07-25 11:55:01.685439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.681 [2024-07-25 11:55:01.685450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.681 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.973 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.973 "name": "Existed_Raid", 00:13:15.973 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 00:13:15.973 "strip_size_kb": 64, 00:13:15.973 "state": "configuring", 00:13:15.973 "raid_level": "raid0", 00:13:15.973 "superblock": true, 00:13:15.973 "num_base_bdevs": 3, 00:13:15.973 "num_base_bdevs_discovered": 1, 00:13:15.973 "num_base_bdevs_operational": 3, 00:13:15.973 "base_bdevs_list": [ 00:13:15.973 { 00:13:15.973 "name": "BaseBdev1", 00:13:15.973 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:15.973 "is_configured": true, 00:13:15.973 "data_offset": 2048, 00:13:15.973 "data_size": 63488 00:13:15.973 }, 00:13:15.973 { 00:13:15.973 "name": "BaseBdev2", 00:13:15.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.973 "is_configured": false, 00:13:15.973 "data_offset": 0, 
00:13:15.973 "data_size": 0 00:13:15.973 }, 00:13:15.973 { 00:13:15.973 "name": "BaseBdev3", 00:13:15.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.973 "is_configured": false, 00:13:15.973 "data_offset": 0, 00:13:15.973 "data_size": 0 00:13:15.973 } 00:13:15.973 ] 00:13:15.973 }' 00:13:15.973 11:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.973 11:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:16.540 [2024-07-25 11:55:02.633661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.540 BaseBdev2 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.540 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.798 11:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.057 [ 00:13:17.057 { 00:13:17.057 "name": "BaseBdev2", 00:13:17.057 "aliases": [ 00:13:17.057 "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72" 00:13:17.057 ], 00:13:17.057 "product_name": "Malloc disk", 00:13:17.057 "block_size": 512, 00:13:17.057 "num_blocks": 65536, 00:13:17.057 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:17.057 "assigned_rate_limits": { 00:13:17.057 "rw_ios_per_sec": 0, 00:13:17.057 "rw_mbytes_per_sec": 0, 00:13:17.057 "r_mbytes_per_sec": 0, 00:13:17.057 "w_mbytes_per_sec": 0 00:13:17.057 }, 00:13:17.057 "claimed": true, 00:13:17.057 "claim_type": "exclusive_write", 00:13:17.057 "zoned": false, 00:13:17.057 "supported_io_types": { 00:13:17.057 "read": true, 00:13:17.057 "write": true, 00:13:17.057 "unmap": true, 00:13:17.057 "flush": true, 00:13:17.057 "reset": true, 00:13:17.057 "nvme_admin": false, 00:13:17.057 "nvme_io": false, 00:13:17.057 "nvme_io_md": false, 00:13:17.057 "write_zeroes": true, 00:13:17.057 "zcopy": true, 00:13:17.057 "get_zone_info": false, 00:13:17.057 "zone_management": false, 00:13:17.057 "zone_append": false, 00:13:17.057 "compare": false, 00:13:17.057 "compare_and_write": false, 00:13:17.057 "abort": true, 00:13:17.057 "seek_hole": false, 00:13:17.057 "seek_data": false, 00:13:17.057 "copy": true, 00:13:17.057 "nvme_iov_md": false 00:13:17.057 }, 00:13:17.057 "memory_domains": [ 00:13:17.057 { 00:13:17.057 "dma_device_id": "system", 00:13:17.057 "dma_device_type": 1 00:13:17.057 }, 00:13:17.057 { 00:13:17.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.057 "dma_device_type": 
2 00:13:17.057 } 00:13:17.057 ], 00:13:17.057 "driver_specific": {} 00:13:17.057 } 00:13:17.057 ] 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.057 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.315 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.315 "name": "Existed_Raid", 00:13:17.315 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 00:13:17.315 "strip_size_kb": 64, 00:13:17.315 "state": "configuring", 00:13:17.315 "raid_level": "raid0", 00:13:17.315 "superblock": true, 00:13:17.315 "num_base_bdevs": 3, 00:13:17.315 "num_base_bdevs_discovered": 2, 00:13:17.315 "num_base_bdevs_operational": 3, 00:13:17.315 "base_bdevs_list": [ 00:13:17.315 { 00:13:17.315 "name": "BaseBdev1", 00:13:17.315 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:17.315 "is_configured": true, 00:13:17.315 "data_offset": 2048, 00:13:17.315 "data_size": 63488 00:13:17.315 }, 00:13:17.315 { 00:13:17.315 "name": "BaseBdev2", 00:13:17.315 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:17.315 "is_configured": true, 00:13:17.315 "data_offset": 2048, 00:13:17.315 "data_size": 63488 00:13:17.315 }, 00:13:17.315 { 00:13:17.315 "name": "BaseBdev3", 00:13:17.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.315 "is_configured": false, 00:13:17.315 "data_offset": 0, 00:13:17.315 "data_size": 0 00:13:17.315 } 00:13:17.315 ] 00:13:17.315 }' 00:13:17.315 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.315 11:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.880 11:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.139 [2024-07-25 11:55:04.072557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.139 [2024-07-25 11:55:04.072697] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x174c700 00:13:18.139 [2024-07-25 11:55:04.072709] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:18.139 [2024-07-25 11:55:04.072867] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x174c3d0 00:13:18.139 [2024-07-25 11:55:04.072976] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x174c700 00:13:18.139 [2024-07-25 11:55:04.072985] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x174c700 00:13:18.139 [2024-07-25 11:55:04.073069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.139 BaseBdev3 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.139 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.397 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.656 [ 00:13:18.656 { 00:13:18.656 "name": "BaseBdev3", 00:13:18.656 "aliases": [ 00:13:18.656 "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22" 00:13:18.656 ], 00:13:18.656 "product_name": "Malloc disk", 00:13:18.656 "block_size": 512, 00:13:18.656 "num_blocks": 65536, 00:13:18.656 "uuid": "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22", 00:13:18.656 "assigned_rate_limits": { 00:13:18.656 "rw_ios_per_sec": 0, 00:13:18.656 "rw_mbytes_per_sec": 0, 00:13:18.656 "r_mbytes_per_sec": 0, 00:13:18.656 "w_mbytes_per_sec": 0 00:13:18.656 }, 00:13:18.656 "claimed": true, 00:13:18.656 "claim_type": "exclusive_write", 00:13:18.656 "zoned": false, 00:13:18.656 "supported_io_types": { 00:13:18.656 "read": true, 00:13:18.656 "write": true, 00:13:18.656 "unmap": true, 00:13:18.656 "flush": true, 00:13:18.656 "reset": true, 00:13:18.656 "nvme_admin": false, 00:13:18.656 "nvme_io": false, 00:13:18.656 "nvme_io_md": false, 00:13:18.656 "write_zeroes": true, 00:13:18.656 "zcopy": true, 00:13:18.656 "get_zone_info": false, 00:13:18.656 "zone_management": false, 00:13:18.656 "zone_append": false, 00:13:18.656 "compare": false, 00:13:18.656 "compare_and_write": false, 00:13:18.656 "abort": true, 00:13:18.657 "seek_hole": false, 00:13:18.657 "seek_data": false, 00:13:18.657 "copy": true, 00:13:18.657 "nvme_iov_md": false 00:13:18.657 }, 00:13:18.657 "memory_domains": [ 00:13:18.657 { 00:13:18.657 "dma_device_id": "system", 00:13:18.657 "dma_device_type": 1 00:13:18.657 }, 00:13:18.657 { 00:13:18.657 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.657 "dma_device_type": 2 00:13:18.657 } 00:13:18.657 ], 00:13:18.657 "driver_specific": {} 00:13:18.657 } 00:13:18.657 ] 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.657 "name": "Existed_Raid", 00:13:18.657 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 00:13:18.657 "strip_size_kb": 64, 00:13:18.657 "state": "online", 00:13:18.657 "raid_level": "raid0", 00:13:18.657 "superblock": true, 00:13:18.657 "num_base_bdevs": 3, 00:13:18.657 "num_base_bdevs_discovered": 3, 00:13:18.657 "num_base_bdevs_operational": 3, 00:13:18.657 "base_bdevs_list": [ 00:13:18.657 { 00:13:18.657 "name": "BaseBdev1", 00:13:18.657 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:18.657 "is_configured": true, 00:13:18.657 "data_offset": 2048, 00:13:18.657 "data_size": 63488 00:13:18.657 }, 00:13:18.657 { 00:13:18.657 "name": "BaseBdev2", 00:13:18.657 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:18.657 "is_configured": true, 00:13:18.657 "data_offset": 2048, 00:13:18.657 "data_size": 63488 00:13:18.657 }, 00:13:18.657 { 00:13:18.657 "name": "BaseBdev3", 00:13:18.657 "uuid": "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22", 00:13:18.657 "is_configured": true, 00:13:18.657 "data_offset": 2048, 00:13:18.657 "data_size": 63488 00:13:18.657 } 00:13:18.657 ] 00:13:18.657 }' 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.657 11:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties 
Existed_Raid 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:19.591 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:19.592 [2024-07-25 11:55:05.548723] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:19.592 "name": "Existed_Raid", 00:13:19.592 "aliases": [ 00:13:19.592 "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651" 00:13:19.592 ], 00:13:19.592 "product_name": "Raid Volume", 00:13:19.592 "block_size": 512, 00:13:19.592 "num_blocks": 190464, 00:13:19.592 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 00:13:19.592 "assigned_rate_limits": { 00:13:19.592 "rw_ios_per_sec": 0, 00:13:19.592 "rw_mbytes_per_sec": 0, 00:13:19.592 "r_mbytes_per_sec": 0, 00:13:19.592 "w_mbytes_per_sec": 0 00:13:19.592 }, 00:13:19.592 "claimed": false, 00:13:19.592 "zoned": false, 00:13:19.592 "supported_io_types": { 00:13:19.592 "read": true, 00:13:19.592 "write": true, 00:13:19.592 "unmap": true, 00:13:19.592 "flush": true, 00:13:19.592 "reset": true, 00:13:19.592 "nvme_admin": false, 00:13:19.592 "nvme_io": false, 00:13:19.592 "nvme_io_md": false, 00:13:19.592 "write_zeroes": true, 00:13:19.592 "zcopy": false, 00:13:19.592 "get_zone_info": false, 00:13:19.592 "zone_management": false, 00:13:19.592 "zone_append": false, 00:13:19.592 "compare": false, 00:13:19.592 "compare_and_write": false, 00:13:19.592 "abort": false, 00:13:19.592 "seek_hole": false, 00:13:19.592 "seek_data": false, 00:13:19.592 "copy": false, 00:13:19.592 "nvme_iov_md": false 00:13:19.592 }, 00:13:19.592 "memory_domains": [ 00:13:19.592 { 00:13:19.592 "dma_device_id": "system", 00:13:19.592 "dma_device_type": 1 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.592 "dma_device_type": 2 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "dma_device_id": "system", 00:13:19.592 "dma_device_type": 1 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.592 "dma_device_type": 2 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "dma_device_id": "system", 00:13:19.592 "dma_device_type": 1 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.592 "dma_device_type": 2 00:13:19.592 } 00:13:19.592 ], 00:13:19.592 "driver_specific": { 00:13:19.592 "raid": { 00:13:19.592 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 00:13:19.592 "strip_size_kb": 64, 00:13:19.592 "state": "online", 00:13:19.592 "raid_level": "raid0", 00:13:19.592 "superblock": true, 00:13:19.592 "num_base_bdevs": 3, 00:13:19.592 "num_base_bdevs_discovered": 3, 00:13:19.592 "num_base_bdevs_operational": 3, 00:13:19.592 "base_bdevs_list": [ 00:13:19.592 { 00:13:19.592 "name": "BaseBdev1", 
00:13:19.592 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:19.592 "is_configured": true, 00:13:19.592 "data_offset": 2048, 00:13:19.592 "data_size": 63488 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "name": "BaseBdev2", 00:13:19.592 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:19.592 "is_configured": true, 00:13:19.592 "data_offset": 2048, 00:13:19.592 "data_size": 63488 00:13:19.592 }, 00:13:19.592 { 00:13:19.592 "name": "BaseBdev3", 00:13:19.592 "uuid": "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22", 00:13:19.592 "is_configured": true, 00:13:19.592 "data_offset": 2048, 00:13:19.592 "data_size": 63488 00:13:19.592 } 00:13:19.592 ] 00:13:19.592 } 00:13:19.592 } 00:13:19.592 }' 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:19.592 BaseBdev2 00:13:19.592 BaseBdev3' 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:19.592 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.850 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.850 "name": "BaseBdev1", 00:13:19.850 "aliases": [ 00:13:19.850 "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef" 00:13:19.850 ], 00:13:19.850 "product_name": "Malloc disk", 00:13:19.850 "block_size": 512, 00:13:19.850 "num_blocks": 65536, 00:13:19.850 "uuid": "8f4a6eac-5ef3-4bbf-91af-ee4040cc89ef", 00:13:19.850 "assigned_rate_limits": { 00:13:19.850 "rw_ios_per_sec": 0, 00:13:19.850 "rw_mbytes_per_sec": 0, 00:13:19.850 "r_mbytes_per_sec": 0, 00:13:19.850 "w_mbytes_per_sec": 0 00:13:19.850 }, 00:13:19.850 "claimed": true, 00:13:19.850 "claim_type": "exclusive_write", 00:13:19.850 "zoned": false, 00:13:19.850 "supported_io_types": { 00:13:19.850 "read": true, 00:13:19.850 "write": true, 00:13:19.850 "unmap": true, 00:13:19.850 "flush": true, 00:13:19.850 "reset": true, 00:13:19.850 "nvme_admin": false, 00:13:19.850 "nvme_io": false, 00:13:19.850 "nvme_io_md": false, 00:13:19.850 "write_zeroes": true, 00:13:19.850 "zcopy": true, 00:13:19.850 "get_zone_info": false, 00:13:19.850 "zone_management": false, 00:13:19.850 "zone_append": false, 00:13:19.850 "compare": false, 00:13:19.850 "compare_and_write": false, 00:13:19.850 "abort": true, 00:13:19.850 "seek_hole": false, 00:13:19.850 "seek_data": false, 00:13:19.850 "copy": true, 00:13:19.850 "nvme_iov_md": false 00:13:19.850 }, 00:13:19.850 "memory_domains": [ 00:13:19.850 { 00:13:19.850 "dma_device_id": "system", 00:13:19.850 "dma_device_type": 1 00:13:19.850 }, 00:13:19.850 { 00:13:19.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.850 "dma_device_type": 2 00:13:19.850 } 00:13:19.850 ], 00:13:19.850 "driver_specific": {} 00:13:19.850 }' 00:13:19.850 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.850 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.850 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.851 11:55:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.109 11:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.109 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:20.367 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.367 "name": "BaseBdev2", 00:13:20.368 "aliases": [ 00:13:20.368 "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72" 00:13:20.368 ], 00:13:20.368 "product_name": "Malloc disk", 00:13:20.368 "block_size": 512, 00:13:20.368 "num_blocks": 65536, 00:13:20.368 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:20.368 "assigned_rate_limits": { 00:13:20.368 "rw_ios_per_sec": 0, 00:13:20.368 "rw_mbytes_per_sec": 0, 00:13:20.368 "r_mbytes_per_sec": 0, 00:13:20.368 "w_mbytes_per_sec": 0 00:13:20.368 }, 00:13:20.368 "claimed": true, 00:13:20.368 "claim_type": "exclusive_write", 00:13:20.368 "zoned": false, 00:13:20.368 "supported_io_types": { 00:13:20.368 "read": true, 00:13:20.368 "write": true, 00:13:20.368 "unmap": true, 00:13:20.368 "flush": true, 00:13:20.368 "reset": true, 00:13:20.368 "nvme_admin": false, 00:13:20.368 "nvme_io": false, 00:13:20.368 "nvme_io_md": false, 00:13:20.368 "write_zeroes": true, 00:13:20.368 "zcopy": true, 00:13:20.368 "get_zone_info": false, 00:13:20.368 "zone_management": false, 00:13:20.368 "zone_append": false, 00:13:20.368 "compare": false, 00:13:20.368 "compare_and_write": false, 00:13:20.368 "abort": true, 00:13:20.368 "seek_hole": false, 00:13:20.368 "seek_data": false, 00:13:20.368 "copy": true, 00:13:20.368 "nvme_iov_md": false 00:13:20.368 }, 00:13:20.368 "memory_domains": [ 00:13:20.368 { 00:13:20.368 "dma_device_id": "system", 00:13:20.368 "dma_device_type": 1 00:13:20.368 }, 00:13:20.368 { 00:13:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.368 "dma_device_type": 2 00:13:20.368 } 00:13:20.368 ], 00:13:20.368 "driver_specific": {} 00:13:20.368 }' 00:13:20.368 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.368 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.626 11:55:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.626 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.885 "name": "BaseBdev3", 00:13:20.885 "aliases": [ 00:13:20.885 "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22" 00:13:20.885 ], 00:13:20.885 "product_name": "Malloc disk", 00:13:20.885 "block_size": 512, 00:13:20.885 "num_blocks": 65536, 00:13:20.885 "uuid": "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22", 00:13:20.885 "assigned_rate_limits": { 00:13:20.885 "rw_ios_per_sec": 0, 00:13:20.885 "rw_mbytes_per_sec": 0, 00:13:20.885 "r_mbytes_per_sec": 0, 00:13:20.885 "w_mbytes_per_sec": 0 00:13:20.885 }, 00:13:20.885 "claimed": true, 00:13:20.885 "claim_type": "exclusive_write", 00:13:20.885 "zoned": false, 00:13:20.885 "supported_io_types": { 00:13:20.885 "read": true, 00:13:20.885 "write": true, 00:13:20.885 "unmap": true, 00:13:20.885 "flush": true, 00:13:20.885 "reset": true, 00:13:20.885 "nvme_admin": false, 00:13:20.885 "nvme_io": false, 00:13:20.885 "nvme_io_md": false, 00:13:20.885 "write_zeroes": true, 00:13:20.885 "zcopy": true, 00:13:20.885 "get_zone_info": false, 00:13:20.885 "zone_management": false, 00:13:20.885 "zone_append": false, 00:13:20.885 "compare": false, 00:13:20.885 "compare_and_write": false, 00:13:20.885 "abort": true, 00:13:20.885 "seek_hole": false, 00:13:20.885 "seek_data": false, 00:13:20.885 "copy": true, 00:13:20.885 "nvme_iov_md": false 00:13:20.885 }, 00:13:20.885 "memory_domains": [ 00:13:20.885 { 00:13:20.885 "dma_device_id": "system", 00:13:20.885 "dma_device_type": 1 00:13:20.885 }, 00:13:20.885 { 00:13:20.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.885 "dma_device_type": 2 00:13:20.885 } 00:13:20.885 ], 00:13:20.885 "driver_specific": {} 00:13:20.885 }' 00:13:20.885 11:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.143 11:55:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.143 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.401 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.401 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.401 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.401 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:21.659 [2024-07-25 11:55:07.553805] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.659 [2024-07-25 11:55:07.553828] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.659 [2024-07-25 11:55:07.553864] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:21.659 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.660 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.918 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.918 "name": "Existed_Raid", 00:13:21.918 "uuid": "cffc7b6e-fe93-4c85-805b-3cfbf8e9b651", 
00:13:21.918 "strip_size_kb": 64, 00:13:21.918 "state": "offline", 00:13:21.918 "raid_level": "raid0", 00:13:21.918 "superblock": true, 00:13:21.918 "num_base_bdevs": 3, 00:13:21.918 "num_base_bdevs_discovered": 2, 00:13:21.918 "num_base_bdevs_operational": 2, 00:13:21.918 "base_bdevs_list": [ 00:13:21.918 { 00:13:21.918 "name": null, 00:13:21.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.918 "is_configured": false, 00:13:21.918 "data_offset": 2048, 00:13:21.918 "data_size": 63488 00:13:21.918 }, 00:13:21.918 { 00:13:21.918 "name": "BaseBdev2", 00:13:21.918 "uuid": "dbb5cf04-63ad-4fad-9ad5-1b7d81415d72", 00:13:21.918 "is_configured": true, 00:13:21.918 "data_offset": 2048, 00:13:21.918 "data_size": 63488 00:13:21.918 }, 00:13:21.918 { 00:13:21.918 "name": "BaseBdev3", 00:13:21.918 "uuid": "a30ce1b8-ae7e-4247-9b47-f3bc0adafb22", 00:13:21.918 "is_configured": true, 00:13:21.918 "data_offset": 2048, 00:13:21.918 "data_size": 63488 00:13:21.918 } 00:13:21.918 ] 00:13:21.918 }' 00:13:21.918 11:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.918 11:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.485 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:22.485 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:22.485 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.485 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:22.744 [2024-07-25 11:55:08.830196] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.744 11:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:23.002 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:23.002 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.002 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:23.262 [2024-07-25 11:55:09.297553] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:23.262 [2024-07-25 11:55:09.297592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x174c700 name Existed_Raid, state offline 00:13:23.262 
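At @274-281 above the first base bdev is deleted out from under the array; because raid0 carries no redundancy (has_redundancy returns 1), the expected state is offline, and verify_raid_bdev_state confirms it from the bdev_raid_get_bdevs output before the remaining members are deleted at @291 and the raid bdev cleans itself up. A small sketch of the state query that verification relies on, using the same socket as this run; the $rpc and info names are illustrative and the expected values are the ones shown in the JSON above:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  set -e   # abort on the first failed comparison

  # Pull the entry for Existed_Raid out of the full raid bdev listing.
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

  # Expected after losing one of three raid0 members:
  #   state=offline, raid_level=raid0, strip_size_kb=64, two operational base bdevs.
  [[ $(jq -r .state                    <<< "$info") == offline ]]
  [[ $(jq -r .raid_level               <<< "$info") == raid0   ]]
  [[ $(jq .strip_size_kb               <<< "$info") == 64      ]]
  [[ $(jq .num_base_bdevs_operational  <<< "$info") == 2       ]]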
11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:23.262 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:23.262 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.262 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:23.521 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:23.521 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:23.522 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:23.522 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:23.522 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:23.522 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.781 BaseBdev2 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.781 11:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.041 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.300 [ 00:13:24.300 { 00:13:24.300 "name": "BaseBdev2", 00:13:24.300 "aliases": [ 00:13:24.300 "270c038a-43cd-460d-83d3-d0a98ba3615b" 00:13:24.300 ], 00:13:24.300 "product_name": "Malloc disk", 00:13:24.300 "block_size": 512, 00:13:24.300 "num_blocks": 65536, 00:13:24.300 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:24.300 "assigned_rate_limits": { 00:13:24.300 "rw_ios_per_sec": 0, 00:13:24.300 "rw_mbytes_per_sec": 0, 00:13:24.300 "r_mbytes_per_sec": 0, 00:13:24.300 "w_mbytes_per_sec": 0 00:13:24.300 }, 00:13:24.300 "claimed": false, 00:13:24.300 "zoned": false, 00:13:24.300 "supported_io_types": { 00:13:24.300 "read": true, 00:13:24.300 "write": true, 00:13:24.300 "unmap": true, 00:13:24.300 "flush": true, 00:13:24.300 "reset": true, 00:13:24.300 "nvme_admin": false, 00:13:24.300 "nvme_io": false, 00:13:24.300 "nvme_io_md": false, 00:13:24.300 "write_zeroes": true, 00:13:24.300 "zcopy": true, 00:13:24.300 "get_zone_info": false, 00:13:24.300 "zone_management": false, 00:13:24.300 "zone_append": false, 00:13:24.300 "compare": false, 00:13:24.300 "compare_and_write": false, 00:13:24.300 "abort": true, 
00:13:24.300 "seek_hole": false, 00:13:24.300 "seek_data": false, 00:13:24.300 "copy": true, 00:13:24.300 "nvme_iov_md": false 00:13:24.300 }, 00:13:24.300 "memory_domains": [ 00:13:24.300 { 00:13:24.300 "dma_device_id": "system", 00:13:24.300 "dma_device_type": 1 00:13:24.300 }, 00:13:24.300 { 00:13:24.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.300 "dma_device_type": 2 00:13:24.300 } 00:13:24.300 ], 00:13:24.300 "driver_specific": {} 00:13:24.300 } 00:13:24.300 ] 00:13:24.300 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:24.300 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:24.300 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:24.300 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.559 BaseBdev3 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.559 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.818 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.818 [ 00:13:24.818 { 00:13:24.818 "name": "BaseBdev3", 00:13:24.818 "aliases": [ 00:13:24.818 "d5078fcb-ff5d-4e89-98a5-f8397a890e8c" 00:13:24.818 ], 00:13:24.818 "product_name": "Malloc disk", 00:13:24.818 "block_size": 512, 00:13:24.818 "num_blocks": 65536, 00:13:24.818 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:24.818 "assigned_rate_limits": { 00:13:24.819 "rw_ios_per_sec": 0, 00:13:24.819 "rw_mbytes_per_sec": 0, 00:13:24.819 "r_mbytes_per_sec": 0, 00:13:24.819 "w_mbytes_per_sec": 0 00:13:24.819 }, 00:13:24.819 "claimed": false, 00:13:24.819 "zoned": false, 00:13:24.819 "supported_io_types": { 00:13:24.819 "read": true, 00:13:24.819 "write": true, 00:13:24.819 "unmap": true, 00:13:24.819 "flush": true, 00:13:24.819 "reset": true, 00:13:24.819 "nvme_admin": false, 00:13:24.819 "nvme_io": false, 00:13:24.819 "nvme_io_md": false, 00:13:24.819 "write_zeroes": true, 00:13:24.819 "zcopy": true, 00:13:24.819 "get_zone_info": false, 00:13:24.819 "zone_management": false, 00:13:24.819 "zone_append": false, 00:13:24.819 "compare": false, 00:13:24.819 "compare_and_write": false, 00:13:24.819 "abort": true, 00:13:24.819 "seek_hole": false, 00:13:24.819 "seek_data": false, 00:13:24.819 "copy": true, 00:13:24.819 "nvme_iov_md": false 00:13:24.819 }, 00:13:24.819 "memory_domains": [ 00:13:24.819 { 00:13:24.819 "dma_device_id": "system", 00:13:24.819 
"dma_device_type": 1 00:13:24.819 }, 00:13:24.819 { 00:13:24.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.819 "dma_device_type": 2 00:13:24.819 } 00:13:24.819 ], 00:13:24.819 "driver_specific": {} 00:13:24.819 } 00:13:24.819 ] 00:13:24.819 11:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:24.819 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:24.819 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:24.819 11:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:25.078 [2024-07-25 11:55:11.121296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.078 [2024-07-25 11:55:11.121331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.078 [2024-07-25 11:55:11.121348] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.078 [2024-07-25 11:55:11.122561] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.078 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.337 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.337 "name": "Existed_Raid", 00:13:25.337 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:25.337 "strip_size_kb": 64, 00:13:25.337 "state": "configuring", 00:13:25.337 "raid_level": "raid0", 00:13:25.337 "superblock": true, 00:13:25.337 "num_base_bdevs": 3, 00:13:25.337 "num_base_bdevs_discovered": 2, 00:13:25.337 "num_base_bdevs_operational": 3, 00:13:25.337 "base_bdevs_list": [ 00:13:25.337 { 00:13:25.337 "name": "BaseBdev1", 00:13:25.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.337 "is_configured": false, 00:13:25.337 "data_offset": 0, 
00:13:25.337 "data_size": 0 00:13:25.337 }, 00:13:25.337 { 00:13:25.337 "name": "BaseBdev2", 00:13:25.337 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:25.337 "is_configured": true, 00:13:25.337 "data_offset": 2048, 00:13:25.337 "data_size": 63488 00:13:25.337 }, 00:13:25.338 { 00:13:25.338 "name": "BaseBdev3", 00:13:25.338 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:25.338 "is_configured": true, 00:13:25.338 "data_offset": 2048, 00:13:25.338 "data_size": 63488 00:13:25.338 } 00:13:25.338 ] 00:13:25.338 }' 00:13:25.338 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.338 11:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.906 11:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:26.165 [2024-07-25 11:55:12.140109] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.165 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.424 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.424 "name": "Existed_Raid", 00:13:26.424 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:26.424 "strip_size_kb": 64, 00:13:26.424 "state": "configuring", 00:13:26.424 "raid_level": "raid0", 00:13:26.424 "superblock": true, 00:13:26.424 "num_base_bdevs": 3, 00:13:26.424 "num_base_bdevs_discovered": 1, 00:13:26.424 "num_base_bdevs_operational": 3, 00:13:26.424 "base_bdevs_list": [ 00:13:26.424 { 00:13:26.424 "name": "BaseBdev1", 00:13:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.424 "is_configured": false, 00:13:26.424 "data_offset": 0, 00:13:26.424 "data_size": 0 00:13:26.424 }, 00:13:26.424 { 00:13:26.424 "name": null, 00:13:26.424 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:26.424 "is_configured": false, 00:13:26.424 "data_offset": 2048, 00:13:26.424 "data_size": 63488 00:13:26.424 }, 00:13:26.424 { 
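Steps @301-309 above rebuild the array the other way around: the malloc members are recreated first, bdev_raid_create is issued with -s (superblock) while BaseBdev1 is still missing, so the volume sits in the configuring state, and removing BaseBdev2 again at @308 only lowers num_base_bdevs_discovered without changing that state. A condensed sketch of the same sequence, hedged to the names and sizes used in this run ($rpc is an illustrative shorthand for the rpc.py invocation in the trace):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Recreate two of the three members (32 MB malloc bdevs, 512-byte blocks, as in the trace).
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_malloc_create 32 512 -b BaseBdev3

  # Create the superblock raid0 volume; BaseBdev1 does not exist yet, so the
  # volume cannot assemble fully and is reported in the "configuring" state.
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Detach one member again; its slot stays in base_bdevs_list but is no longer configured.
  $rpc bdev_raid_remove_base_bdev BaseBdev2
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expected: false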
00:13:26.424 "name": "BaseBdev3", 00:13:26.424 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:26.424 "is_configured": true, 00:13:26.424 "data_offset": 2048, 00:13:26.424 "data_size": 63488 00:13:26.424 } 00:13:26.424 ] 00:13:26.424 }' 00:13:26.424 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.424 11:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.991 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.991 11:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.249 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:27.249 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.506 [2024-07-25 11:55:13.406631] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.506 BaseBdev1 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.506 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.763 [ 00:13:27.763 { 00:13:27.763 "name": "BaseBdev1", 00:13:27.763 "aliases": [ 00:13:27.763 "4152a7b9-cfc4-4058-81f2-e88e18ecf88c" 00:13:27.763 ], 00:13:27.763 "product_name": "Malloc disk", 00:13:27.763 "block_size": 512, 00:13:27.763 "num_blocks": 65536, 00:13:27.763 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:27.763 "assigned_rate_limits": { 00:13:27.763 "rw_ios_per_sec": 0, 00:13:27.763 "rw_mbytes_per_sec": 0, 00:13:27.763 "r_mbytes_per_sec": 0, 00:13:27.763 "w_mbytes_per_sec": 0 00:13:27.763 }, 00:13:27.763 "claimed": true, 00:13:27.763 "claim_type": "exclusive_write", 00:13:27.763 "zoned": false, 00:13:27.763 "supported_io_types": { 00:13:27.763 "read": true, 00:13:27.763 "write": true, 00:13:27.763 "unmap": true, 00:13:27.763 "flush": true, 00:13:27.763 "reset": true, 00:13:27.763 "nvme_admin": false, 00:13:27.763 "nvme_io": false, 00:13:27.763 "nvme_io_md": false, 00:13:27.763 "write_zeroes": true, 00:13:27.763 "zcopy": true, 00:13:27.763 "get_zone_info": false, 00:13:27.763 "zone_management": false, 00:13:27.763 "zone_append": false, 00:13:27.763 "compare": false, 00:13:27.763 "compare_and_write": false, 
00:13:27.763 "abort": true, 00:13:27.763 "seek_hole": false, 00:13:27.763 "seek_data": false, 00:13:27.763 "copy": true, 00:13:27.763 "nvme_iov_md": false 00:13:27.763 }, 00:13:27.763 "memory_domains": [ 00:13:27.763 { 00:13:27.763 "dma_device_id": "system", 00:13:27.763 "dma_device_type": 1 00:13:27.763 }, 00:13:27.763 { 00:13:27.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.763 "dma_device_type": 2 00:13:27.763 } 00:13:27.763 ], 00:13:27.763 "driver_specific": {} 00:13:27.763 } 00:13:27.763 ] 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.763 11:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.020 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.020 "name": "Existed_Raid", 00:13:28.020 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:28.020 "strip_size_kb": 64, 00:13:28.020 "state": "configuring", 00:13:28.020 "raid_level": "raid0", 00:13:28.020 "superblock": true, 00:13:28.020 "num_base_bdevs": 3, 00:13:28.020 "num_base_bdevs_discovered": 2, 00:13:28.020 "num_base_bdevs_operational": 3, 00:13:28.020 "base_bdevs_list": [ 00:13:28.020 { 00:13:28.020 "name": "BaseBdev1", 00:13:28.020 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:28.020 "is_configured": true, 00:13:28.020 "data_offset": 2048, 00:13:28.020 "data_size": 63488 00:13:28.020 }, 00:13:28.020 { 00:13:28.020 "name": null, 00:13:28.020 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:28.020 "is_configured": false, 00:13:28.020 "data_offset": 2048, 00:13:28.020 "data_size": 63488 00:13:28.020 }, 00:13:28.020 { 00:13:28.020 "name": "BaseBdev3", 00:13:28.021 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:28.021 "is_configured": true, 00:13:28.021 "data_offset": 2048, 00:13:28.021 "data_size": 63488 00:13:28.021 } 00:13:28.021 ] 00:13:28.021 }' 00:13:28.021 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.021 11:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
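At @312-315 above the still-missing BaseBdev1 is simply created as a new malloc bdev with the expected name, and the DEBUG line from raid_bdev_configure_base_bdev shows the configuring volume claiming it immediately; the follow-up jq probe confirms slot 0 flips back to is_configured=true. A hedged sketch of that create-and-wait step, mirroring the waitforbdev pattern visible in the trace (bdev_wait_for_examine plus bdev_get_bdevs with a 2000 ms timeout); $rpc is illustrative shorthand:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Create a bdev whose name matches the missing slot; the configuring raid
  # claims it as soon as it is examined.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1

  # Flush examine callbacks, then fetch the bdev with a timeout so the step
  # fails loudly if it never appears.
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null

  # Slot 0 of the raid should now be populated again.
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'   # expected: true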
-- # set +x 00:13:28.586 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.586 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.845 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:28.845 11:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:29.102 [2024-07-25 11:55:15.103109] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.102 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.361 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.361 "name": "Existed_Raid", 00:13:29.361 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:29.361 "strip_size_kb": 64, 00:13:29.361 "state": "configuring", 00:13:29.361 "raid_level": "raid0", 00:13:29.361 "superblock": true, 00:13:29.361 "num_base_bdevs": 3, 00:13:29.361 "num_base_bdevs_discovered": 1, 00:13:29.361 "num_base_bdevs_operational": 3, 00:13:29.361 "base_bdevs_list": [ 00:13:29.361 { 00:13:29.361 "name": "BaseBdev1", 00:13:29.361 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:29.361 "is_configured": true, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 }, 00:13:29.361 { 00:13:29.361 "name": null, 00:13:29.361 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:29.361 "is_configured": false, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 }, 00:13:29.361 { 00:13:29.361 "name": null, 00:13:29.361 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:29.361 "is_configured": false, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 } 00:13:29.361 ] 00:13:29.361 }' 00:13:29.361 11:55:15 
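The @317-323 passage exercises detaching and re-attaching a member without deleting the underlying malloc bdev: bdev_raid_remove_base_bdev clears the slot's is_configured flag while the bdev itself survives, and bdev_raid_add_base_bdev puts it back. A compact sketch of that round trip using the same jq probes as the trace ($rpc is illustrative shorthand for the rpc.py call used in this run):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Detach BaseBdev3 from the (still configuring) volume; the malloc bdev remains.
  $rpc bdev_raid_remove_base_bdev BaseBdev3
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expected: false

  # Re-attach the same bdev to the same slot.
  $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expected: true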
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.361 11:55:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.925 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.925 11:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:30.243 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:30.243 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:30.502 [2024-07-25 11:55:16.354437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.502 "name": "Existed_Raid", 00:13:30.502 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:30.502 "strip_size_kb": 64, 00:13:30.502 "state": "configuring", 00:13:30.502 "raid_level": "raid0", 00:13:30.502 "superblock": true, 00:13:30.502 "num_base_bdevs": 3, 00:13:30.502 "num_base_bdevs_discovered": 2, 00:13:30.502 "num_base_bdevs_operational": 3, 00:13:30.502 "base_bdevs_list": [ 00:13:30.502 { 00:13:30.502 "name": "BaseBdev1", 00:13:30.502 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:30.502 "is_configured": true, 00:13:30.502 "data_offset": 2048, 00:13:30.502 "data_size": 63488 00:13:30.502 }, 00:13:30.502 { 00:13:30.502 "name": null, 00:13:30.502 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:30.502 "is_configured": false, 00:13:30.502 "data_offset": 2048, 00:13:30.502 "data_size": 63488 00:13:30.502 }, 00:13:30.502 { 00:13:30.502 "name": "BaseBdev3", 00:13:30.502 "uuid": 
"d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:30.502 "is_configured": true, 00:13:30.502 "data_offset": 2048, 00:13:30.502 "data_size": 63488 00:13:30.502 } 00:13:30.502 ] 00:13:30.502 }' 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.502 11:55:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.069 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.069 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:31.327 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:31.327 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:31.586 [2024-07-25 11:55:17.621793] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.586 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.844 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.844 "name": "Existed_Raid", 00:13:31.844 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:31.844 "strip_size_kb": 64, 00:13:31.844 "state": "configuring", 00:13:31.844 "raid_level": "raid0", 00:13:31.844 "superblock": true, 00:13:31.844 "num_base_bdevs": 3, 00:13:31.844 "num_base_bdevs_discovered": 1, 00:13:31.844 "num_base_bdevs_operational": 3, 00:13:31.844 "base_bdevs_list": [ 00:13:31.844 { 00:13:31.844 "name": null, 00:13:31.844 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:31.844 "is_configured": false, 00:13:31.844 "data_offset": 2048, 00:13:31.844 "data_size": 63488 00:13:31.844 }, 00:13:31.844 { 00:13:31.844 "name": null, 00:13:31.844 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:31.844 "is_configured": 
false, 00:13:31.844 "data_offset": 2048, 00:13:31.844 "data_size": 63488 00:13:31.844 }, 00:13:31.844 { 00:13:31.844 "name": "BaseBdev3", 00:13:31.844 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:31.844 "is_configured": true, 00:13:31.844 "data_offset": 2048, 00:13:31.844 "data_size": 63488 00:13:31.844 } 00:13:31.844 ] 00:13:31.844 }' 00:13:31.844 11:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.844 11:55:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.424 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.424 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.682 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:32.682 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:32.941 [2024-07-25 11:55:18.866996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.941 11:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.199 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.199 "name": "Existed_Raid", 00:13:33.199 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:33.199 "strip_size_kb": 64, 00:13:33.199 "state": "configuring", 00:13:33.199 "raid_level": "raid0", 00:13:33.199 "superblock": true, 00:13:33.199 "num_base_bdevs": 3, 00:13:33.199 "num_base_bdevs_discovered": 2, 00:13:33.199 "num_base_bdevs_operational": 3, 00:13:33.199 "base_bdevs_list": [ 00:13:33.199 { 00:13:33.199 "name": null, 00:13:33.199 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:33.199 "is_configured": false, 00:13:33.199 
"data_offset": 2048, 00:13:33.199 "data_size": 63488 00:13:33.199 }, 00:13:33.199 { 00:13:33.199 "name": "BaseBdev2", 00:13:33.199 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:33.199 "is_configured": true, 00:13:33.199 "data_offset": 2048, 00:13:33.199 "data_size": 63488 00:13:33.199 }, 00:13:33.199 { 00:13:33.199 "name": "BaseBdev3", 00:13:33.199 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:33.199 "is_configured": true, 00:13:33.199 "data_offset": 2048, 00:13:33.199 "data_size": 63488 00:13:33.199 } 00:13:33.199 ] 00:13:33.199 }' 00:13:33.199 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.199 11:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.764 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.764 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.022 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:34.022 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.022 11:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4152a7b9-cfc4-4058-81f2-e88e18ecf88c 00:13:34.326 [2024-07-25 11:55:20.362292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:34.326 [2024-07-25 11:55:20.362426] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f03e0 00:13:34.326 [2024-07-25 11:55:20.362439] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:34.326 [2024-07-25 11:55:20.362596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18fa190 00:13:34.326 [2024-07-25 11:55:20.362699] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f03e0 00:13:34.326 [2024-07-25 11:55:20.362708] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18f03e0 00:13:34.326 [2024-07-25 11:55:20.362791] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.326 NewBaseBdev 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:34.326 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:34.585 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:34.843 [ 00:13:34.843 { 00:13:34.843 "name": "NewBaseBdev", 00:13:34.843 "aliases": [ 00:13:34.843 "4152a7b9-cfc4-4058-81f2-e88e18ecf88c" 00:13:34.843 ], 00:13:34.844 "product_name": "Malloc disk", 00:13:34.844 "block_size": 512, 00:13:34.844 "num_blocks": 65536, 00:13:34.844 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:34.844 "assigned_rate_limits": { 00:13:34.844 "rw_ios_per_sec": 0, 00:13:34.844 "rw_mbytes_per_sec": 0, 00:13:34.844 "r_mbytes_per_sec": 0, 00:13:34.844 "w_mbytes_per_sec": 0 00:13:34.844 }, 00:13:34.844 "claimed": true, 00:13:34.844 "claim_type": "exclusive_write", 00:13:34.844 "zoned": false, 00:13:34.844 "supported_io_types": { 00:13:34.844 "read": true, 00:13:34.844 "write": true, 00:13:34.844 "unmap": true, 00:13:34.844 "flush": true, 00:13:34.844 "reset": true, 00:13:34.844 "nvme_admin": false, 00:13:34.844 "nvme_io": false, 00:13:34.844 "nvme_io_md": false, 00:13:34.844 "write_zeroes": true, 00:13:34.844 "zcopy": true, 00:13:34.844 "get_zone_info": false, 00:13:34.844 "zone_management": false, 00:13:34.844 "zone_append": false, 00:13:34.844 "compare": false, 00:13:34.844 "compare_and_write": false, 00:13:34.844 "abort": true, 00:13:34.844 "seek_hole": false, 00:13:34.844 "seek_data": false, 00:13:34.844 "copy": true, 00:13:34.844 "nvme_iov_md": false 00:13:34.844 }, 00:13:34.844 "memory_domains": [ 00:13:34.844 { 00:13:34.844 "dma_device_id": "system", 00:13:34.844 "dma_device_type": 1 00:13:34.844 }, 00:13:34.844 { 00:13:34.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.844 "dma_device_type": 2 00:13:34.844 } 00:13:34.844 ], 00:13:34.844 "driver_specific": {} 00:13:34.844 } 00:13:34.844 ] 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.844 11:55:20 bdev_raid.raid_state_function_test_sb -- 
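The last recovery path traced at @325-335 deletes BaseBdev1 outright and replaces it with a bdev of a different name: the original member's UUID is read back from base_bdevs_list, a fresh malloc bdev is created with -u set to that UUID, and the DEBUG lines show the superblock volume claiming NewBaseBdev and finally transitioning to online with all three members. A hedged sketch of that UUID-based replacement; the $rpc and uuid variable names are illustrative, and the final combined jq expression is an equivalent of the two-step query used in the trace:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Recover the UUID the raid recorded for the now-deleted slot 0 member.
  uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')

  # Create a replacement under a new name but the old UUID; as the claim in the
  # trace shows, it is matched to the empty slot and the volume assembles.
  $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b NewBaseBdev -t 2000 >/dev/null

  # All three members present again: the state should now read online.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: online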
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.102 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.102 "name": "Existed_Raid", 00:13:35.102 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:35.102 "strip_size_kb": 64, 00:13:35.102 "state": "online", 00:13:35.102 "raid_level": "raid0", 00:13:35.102 "superblock": true, 00:13:35.102 "num_base_bdevs": 3, 00:13:35.102 "num_base_bdevs_discovered": 3, 00:13:35.102 "num_base_bdevs_operational": 3, 00:13:35.102 "base_bdevs_list": [ 00:13:35.102 { 00:13:35.102 "name": "NewBaseBdev", 00:13:35.102 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:35.102 "is_configured": true, 00:13:35.103 "data_offset": 2048, 00:13:35.103 "data_size": 63488 00:13:35.103 }, 00:13:35.103 { 00:13:35.103 "name": "BaseBdev2", 00:13:35.103 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:35.103 "is_configured": true, 00:13:35.103 "data_offset": 2048, 00:13:35.103 "data_size": 63488 00:13:35.103 }, 00:13:35.103 { 00:13:35.103 "name": "BaseBdev3", 00:13:35.103 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:35.103 "is_configured": true, 00:13:35.103 "data_offset": 2048, 00:13:35.103 "data_size": 63488 00:13:35.103 } 00:13:35.103 ] 00:13:35.103 }' 00:13:35.103 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.103 11:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:35.669 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:35.927 [2024-07-25 11:55:21.822426] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.927 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:35.928 "name": "Existed_Raid", 00:13:35.928 "aliases": [ 00:13:35.928 "db552272-5e87-48fd-93e9-3a7cee0d9ecb" 00:13:35.928 ], 00:13:35.928 "product_name": "Raid Volume", 00:13:35.928 "block_size": 512, 00:13:35.928 "num_blocks": 190464, 00:13:35.928 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:35.928 "assigned_rate_limits": { 00:13:35.928 "rw_ios_per_sec": 0, 00:13:35.928 "rw_mbytes_per_sec": 0, 00:13:35.928 "r_mbytes_per_sec": 0, 00:13:35.928 "w_mbytes_per_sec": 0 00:13:35.928 }, 00:13:35.928 "claimed": false, 00:13:35.928 "zoned": false, 00:13:35.928 "supported_io_types": { 00:13:35.928 "read": true, 00:13:35.928 "write": true, 00:13:35.928 "unmap": true, 00:13:35.928 "flush": true, 00:13:35.928 "reset": true, 00:13:35.928 "nvme_admin": false, 00:13:35.928 "nvme_io": false, 00:13:35.928 "nvme_io_md": 
false, 00:13:35.928 "write_zeroes": true, 00:13:35.928 "zcopy": false, 00:13:35.928 "get_zone_info": false, 00:13:35.928 "zone_management": false, 00:13:35.928 "zone_append": false, 00:13:35.928 "compare": false, 00:13:35.928 "compare_and_write": false, 00:13:35.928 "abort": false, 00:13:35.928 "seek_hole": false, 00:13:35.928 "seek_data": false, 00:13:35.928 "copy": false, 00:13:35.928 "nvme_iov_md": false 00:13:35.928 }, 00:13:35.928 "memory_domains": [ 00:13:35.928 { 00:13:35.928 "dma_device_id": "system", 00:13:35.928 "dma_device_type": 1 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.928 "dma_device_type": 2 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "dma_device_id": "system", 00:13:35.928 "dma_device_type": 1 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.928 "dma_device_type": 2 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "dma_device_id": "system", 00:13:35.928 "dma_device_type": 1 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.928 "dma_device_type": 2 00:13:35.928 } 00:13:35.928 ], 00:13:35.928 "driver_specific": { 00:13:35.928 "raid": { 00:13:35.928 "uuid": "db552272-5e87-48fd-93e9-3a7cee0d9ecb", 00:13:35.928 "strip_size_kb": 64, 00:13:35.928 "state": "online", 00:13:35.928 "raid_level": "raid0", 00:13:35.928 "superblock": true, 00:13:35.928 "num_base_bdevs": 3, 00:13:35.928 "num_base_bdevs_discovered": 3, 00:13:35.928 "num_base_bdevs_operational": 3, 00:13:35.928 "base_bdevs_list": [ 00:13:35.928 { 00:13:35.928 "name": "NewBaseBdev", 00:13:35.928 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:35.928 "is_configured": true, 00:13:35.928 "data_offset": 2048, 00:13:35.928 "data_size": 63488 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "name": "BaseBdev2", 00:13:35.928 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:35.928 "is_configured": true, 00:13:35.928 "data_offset": 2048, 00:13:35.928 "data_size": 63488 00:13:35.928 }, 00:13:35.928 { 00:13:35.928 "name": "BaseBdev3", 00:13:35.928 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:35.928 "is_configured": true, 00:13:35.928 "data_offset": 2048, 00:13:35.928 "data_size": 63488 00:13:35.928 } 00:13:35.928 ] 00:13:35.928 } 00:13:35.928 } 00:13:35.928 }' 00:13:35.928 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.928 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:35.928 BaseBdev2 00:13:35.928 BaseBdev3' 00:13:35.928 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:35.928 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:35.928 11:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.186 "name": "NewBaseBdev", 00:13:36.186 "aliases": [ 00:13:36.186 "4152a7b9-cfc4-4058-81f2-e88e18ecf88c" 00:13:36.186 ], 00:13:36.186 "product_name": "Malloc disk", 00:13:36.186 "block_size": 512, 00:13:36.186 "num_blocks": 65536, 00:13:36.186 "uuid": "4152a7b9-cfc4-4058-81f2-e88e18ecf88c", 00:13:36.186 "assigned_rate_limits": { 00:13:36.186 
"rw_ios_per_sec": 0, 00:13:36.186 "rw_mbytes_per_sec": 0, 00:13:36.186 "r_mbytes_per_sec": 0, 00:13:36.186 "w_mbytes_per_sec": 0 00:13:36.186 }, 00:13:36.186 "claimed": true, 00:13:36.186 "claim_type": "exclusive_write", 00:13:36.186 "zoned": false, 00:13:36.186 "supported_io_types": { 00:13:36.186 "read": true, 00:13:36.186 "write": true, 00:13:36.186 "unmap": true, 00:13:36.186 "flush": true, 00:13:36.186 "reset": true, 00:13:36.186 "nvme_admin": false, 00:13:36.186 "nvme_io": false, 00:13:36.186 "nvme_io_md": false, 00:13:36.186 "write_zeroes": true, 00:13:36.186 "zcopy": true, 00:13:36.186 "get_zone_info": false, 00:13:36.186 "zone_management": false, 00:13:36.186 "zone_append": false, 00:13:36.186 "compare": false, 00:13:36.186 "compare_and_write": false, 00:13:36.186 "abort": true, 00:13:36.186 "seek_hole": false, 00:13:36.186 "seek_data": false, 00:13:36.186 "copy": true, 00:13:36.186 "nvme_iov_md": false 00:13:36.186 }, 00:13:36.186 "memory_domains": [ 00:13:36.186 { 00:13:36.186 "dma_device_id": "system", 00:13:36.186 "dma_device_type": 1 00:13:36.186 }, 00:13:36.186 { 00:13:36.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.186 "dma_device_type": 2 00:13:36.186 } 00:13:36.186 ], 00:13:36.186 "driver_specific": {} 00:13:36.186 }' 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.186 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:36.444 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.702 "name": "BaseBdev2", 00:13:36.702 "aliases": [ 00:13:36.702 "270c038a-43cd-460d-83d3-d0a98ba3615b" 00:13:36.702 ], 00:13:36.702 "product_name": "Malloc disk", 00:13:36.702 "block_size": 512, 00:13:36.702 "num_blocks": 65536, 00:13:36.702 "uuid": "270c038a-43cd-460d-83d3-d0a98ba3615b", 00:13:36.702 "assigned_rate_limits": { 00:13:36.702 "rw_ios_per_sec": 0, 00:13:36.702 "rw_mbytes_per_sec": 0, 00:13:36.702 "r_mbytes_per_sec": 0, 00:13:36.702 "w_mbytes_per_sec": 0 
00:13:36.702 }, 00:13:36.702 "claimed": true, 00:13:36.702 "claim_type": "exclusive_write", 00:13:36.702 "zoned": false, 00:13:36.702 "supported_io_types": { 00:13:36.702 "read": true, 00:13:36.702 "write": true, 00:13:36.702 "unmap": true, 00:13:36.702 "flush": true, 00:13:36.702 "reset": true, 00:13:36.702 "nvme_admin": false, 00:13:36.702 "nvme_io": false, 00:13:36.702 "nvme_io_md": false, 00:13:36.702 "write_zeroes": true, 00:13:36.702 "zcopy": true, 00:13:36.702 "get_zone_info": false, 00:13:36.702 "zone_management": false, 00:13:36.702 "zone_append": false, 00:13:36.702 "compare": false, 00:13:36.702 "compare_and_write": false, 00:13:36.702 "abort": true, 00:13:36.702 "seek_hole": false, 00:13:36.702 "seek_data": false, 00:13:36.702 "copy": true, 00:13:36.702 "nvme_iov_md": false 00:13:36.702 }, 00:13:36.702 "memory_domains": [ 00:13:36.702 { 00:13:36.702 "dma_device_id": "system", 00:13:36.702 "dma_device_type": 1 00:13:36.702 }, 00:13:36.702 { 00:13:36.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.702 "dma_device_type": 2 00:13:36.702 } 00:13:36.702 ], 00:13:36.702 "driver_specific": {} 00:13:36.702 }' 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.702 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.960 11:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.960 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.960 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.960 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:36.960 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.217 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.217 "name": "BaseBdev3", 00:13:37.217 "aliases": [ 00:13:37.217 "d5078fcb-ff5d-4e89-98a5-f8397a890e8c" 00:13:37.217 ], 00:13:37.217 "product_name": "Malloc disk", 00:13:37.217 "block_size": 512, 00:13:37.217 "num_blocks": 65536, 00:13:37.217 "uuid": "d5078fcb-ff5d-4e89-98a5-f8397a890e8c", 00:13:37.217 "assigned_rate_limits": { 00:13:37.217 "rw_ios_per_sec": 0, 00:13:37.217 "rw_mbytes_per_sec": 0, 00:13:37.217 "r_mbytes_per_sec": 0, 00:13:37.217 "w_mbytes_per_sec": 0 00:13:37.217 }, 00:13:37.217 "claimed": true, 00:13:37.217 "claim_type": "exclusive_write", 00:13:37.217 "zoned": false, 00:13:37.217 
"supported_io_types": { 00:13:37.217 "read": true, 00:13:37.217 "write": true, 00:13:37.217 "unmap": true, 00:13:37.217 "flush": true, 00:13:37.217 "reset": true, 00:13:37.217 "nvme_admin": false, 00:13:37.217 "nvme_io": false, 00:13:37.217 "nvme_io_md": false, 00:13:37.217 "write_zeroes": true, 00:13:37.217 "zcopy": true, 00:13:37.217 "get_zone_info": false, 00:13:37.217 "zone_management": false, 00:13:37.217 "zone_append": false, 00:13:37.217 "compare": false, 00:13:37.217 "compare_and_write": false, 00:13:37.217 "abort": true, 00:13:37.217 "seek_hole": false, 00:13:37.217 "seek_data": false, 00:13:37.217 "copy": true, 00:13:37.217 "nvme_iov_md": false 00:13:37.217 }, 00:13:37.217 "memory_domains": [ 00:13:37.217 { 00:13:37.217 "dma_device_id": "system", 00:13:37.217 "dma_device_type": 1 00:13:37.217 }, 00:13:37.217 { 00:13:37.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.217 "dma_device_type": 2 00:13:37.217 } 00:13:37.217 ], 00:13:37.217 "driver_specific": {} 00:13:37.217 }' 00:13:37.217 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.217 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.217 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.217 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.474 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:37.731 [2024-07-25 11:55:23.791355] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.731 [2024-07-25 11:55:23.791378] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.731 [2024-07-25 11:55:23.791427] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.731 [2024-07-25 11:55:23.791473] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.731 [2024-07-25 11:55:23.791489] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f03e0 name Existed_Raid, state offline 00:13:37.731 11:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4123068 00:13:37.731 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4123068 ']' 00:13:37.731 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4123068 00:13:37.731 11:55:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:37.731 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.731 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4123068 00:13:37.989 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.989 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.989 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4123068' 00:13:37.989 killing process with pid 4123068 00:13:37.989 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4123068 00:13:37.989 [2024-07-25 11:55:23.866893] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.989 11:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4123068 00:13:37.989 [2024-07-25 11:55:23.890993] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.989 11:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:37.989 00:13:37.989 real 0m26.712s 00:13:37.989 user 0m48.970s 00:13:37.989 sys 0m4.892s 00:13:37.989 11:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.989 11:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.989 ************************************ 00:13:37.989 END TEST raid_state_function_test_sb 00:13:37.989 ************************************ 00:13:38.247 11:55:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:38.247 11:55:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.247 11:55:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.247 11:55:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.247 ************************************ 00:13:38.247 START TEST raid_superblock_test 00:13:38.247 ************************************ 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local 
strip_size_create_arg 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4128173 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4128173 /var/tmp/spdk-raid.sock 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4128173 ']' 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:38.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.247 11:55:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.247 [2024-07-25 11:55:24.225421] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
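The entries above show raid_superblock_test bringing up its own bdev_svc application on a dedicated RPC socket (/var/tmp/spdk-raid.sock) with bdev_raid debug logging enabled, then waiting in waitforlisten until that socket answers before any test RPCs are issued. A minimal stand-alone sketch of that pattern follows; the polling loop with rpc_get_methods is an illustrative stand-in for the harness's waitforlisten helper, and the paths simply reuse the ones visible in this log.

#!/usr/bin/env bash
# Sketch: run a private bdev_svc instance and drive it over its own RPC socket.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &   # -L bdev_raid enables the DEBUG lines seen in this log
svc_pid=$!

# Poll until the app accepts RPCs on the socket (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited before listening" >&2; exit 1; }
    sleep 0.2
done

# Every subsequent RPC targets this instance via -s, e.g. the first base bdev:
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b malloc1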
00:13:38.247 [2024-07-25 11:55:24.225474] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128173 ] 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:38.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.247 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:38.248 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:38.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.248 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:38.248 [2024-07-25 11:55:24.357030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.505 [2024-07-25 11:55:24.443992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.505 [2024-07-25 11:55:24.498742] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.505 [2024-07-25 11:55:24.498769] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.068 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:39.325 malloc1 00:13:39.325 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.583 [2024-07-25 11:55:25.574260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.583 [2024-07-25 11:55:25.574302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.583 [2024-07-25 11:55:25.574318] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff22f0 00:13:39.583 [2024-07-25 11:55:25.574329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.583 [2024-07-25 11:55:25.575741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.583 [2024-07-25 11:55:25.575768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.583 pt1 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.583 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:39.841 malloc2 00:13:39.841 11:55:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.099 [2024-07-25 11:55:26.031823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.099 [2024-07-25 11:55:26.031862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.099 [2024-07-25 11:55:26.031876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff36d0 00:13:40.099 [2024-07-25 11:55:26.031887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.099 [2024-07-25 11:55:26.033278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.099 [2024-07-25 11:55:26.033304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.099 pt2 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.099 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:40.357 malloc3 00:13:40.357 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.615 [2024-07-25 11:55:26.493378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.615 [2024-07-25 11:55:26.493420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.615 [2024-07-25 11:55:26.493436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x218c6b0 00:13:40.615 [2024-07-25 11:55:26.493447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.615 [2024-07-25 11:55:26.494803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.615 [2024-07-25 11:55:26.494828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.615 pt3 00:13:40.615 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:40.615 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:40.615 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:40.615 [2024-07-25 11:55:26.722000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.615 [2024-07-25 11:55:26.723153] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.615 [2024-07-25 11:55:26.723204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.615 [2024-07-25 11:55:26.723340] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x218ccb0 00:13:40.615 [2024-07-25 11:55:26.723355] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:40.615 [2024-07-25 11:55:26.723532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218b270 00:13:40.615 [2024-07-25 11:55:26.723664] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x218ccb0 00:13:40.615 [2024-07-25 11:55:26.723674] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x218ccb0 00:13:40.615 [2024-07-25 11:55:26.723761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.872 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.872 "name": "raid_bdev1", 00:13:40.872 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:40.872 "strip_size_kb": 64, 00:13:40.872 "state": "online", 00:13:40.872 "raid_level": "raid0", 00:13:40.872 "superblock": true, 00:13:40.872 "num_base_bdevs": 3, 00:13:40.872 "num_base_bdevs_discovered": 3, 00:13:40.872 "num_base_bdevs_operational": 3, 00:13:40.873 "base_bdevs_list": [ 00:13:40.873 { 00:13:40.873 "name": "pt1", 00:13:40.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.873 "is_configured": true, 00:13:40.873 "data_offset": 2048, 00:13:40.873 "data_size": 63488 00:13:40.873 }, 00:13:40.873 { 00:13:40.873 "name": "pt2", 00:13:40.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.873 "is_configured": true, 00:13:40.873 "data_offset": 2048, 00:13:40.873 "data_size": 63488 00:13:40.873 }, 00:13:40.873 { 00:13:40.873 "name": "pt3", 00:13:40.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.873 "is_configured": true, 00:13:40.873 "data_offset": 2048, 00:13:40.873 "data_size": 63488 00:13:40.873 } 00:13:40.873 ] 00:13:40.873 }' 00:13:40.873 11:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.873 11:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:41.437 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:41.695 [2024-07-25 11:55:27.748933] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.695 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:41.695 "name": "raid_bdev1", 00:13:41.695 "aliases": [ 00:13:41.695 "f16531c1-20c4-479b-9491-0ce1c795a1f3" 00:13:41.695 ], 00:13:41.695 "product_name": "Raid Volume", 00:13:41.695 "block_size": 512, 00:13:41.695 "num_blocks": 190464, 00:13:41.695 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:41.695 "assigned_rate_limits": { 00:13:41.695 "rw_ios_per_sec": 0, 00:13:41.695 "rw_mbytes_per_sec": 0, 00:13:41.695 
"r_mbytes_per_sec": 0, 00:13:41.695 "w_mbytes_per_sec": 0 00:13:41.695 }, 00:13:41.695 "claimed": false, 00:13:41.695 "zoned": false, 00:13:41.695 "supported_io_types": { 00:13:41.695 "read": true, 00:13:41.695 "write": true, 00:13:41.695 "unmap": true, 00:13:41.695 "flush": true, 00:13:41.695 "reset": true, 00:13:41.695 "nvme_admin": false, 00:13:41.695 "nvme_io": false, 00:13:41.695 "nvme_io_md": false, 00:13:41.695 "write_zeroes": true, 00:13:41.695 "zcopy": false, 00:13:41.695 "get_zone_info": false, 00:13:41.695 "zone_management": false, 00:13:41.695 "zone_append": false, 00:13:41.695 "compare": false, 00:13:41.695 "compare_and_write": false, 00:13:41.695 "abort": false, 00:13:41.695 "seek_hole": false, 00:13:41.695 "seek_data": false, 00:13:41.695 "copy": false, 00:13:41.695 "nvme_iov_md": false 00:13:41.695 }, 00:13:41.695 "memory_domains": [ 00:13:41.695 { 00:13:41.695 "dma_device_id": "system", 00:13:41.695 "dma_device_type": 1 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.695 "dma_device_type": 2 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "dma_device_id": "system", 00:13:41.695 "dma_device_type": 1 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.695 "dma_device_type": 2 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "dma_device_id": "system", 00:13:41.695 "dma_device_type": 1 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.695 "dma_device_type": 2 00:13:41.695 } 00:13:41.695 ], 00:13:41.695 "driver_specific": { 00:13:41.695 "raid": { 00:13:41.695 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:41.695 "strip_size_kb": 64, 00:13:41.695 "state": "online", 00:13:41.695 "raid_level": "raid0", 00:13:41.695 "superblock": true, 00:13:41.695 "num_base_bdevs": 3, 00:13:41.695 "num_base_bdevs_discovered": 3, 00:13:41.695 "num_base_bdevs_operational": 3, 00:13:41.695 "base_bdevs_list": [ 00:13:41.695 { 00:13:41.695 "name": "pt1", 00:13:41.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.695 "is_configured": true, 00:13:41.695 "data_offset": 2048, 00:13:41.695 "data_size": 63488 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "name": "pt2", 00:13:41.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.695 "is_configured": true, 00:13:41.695 "data_offset": 2048, 00:13:41.695 "data_size": 63488 00:13:41.695 }, 00:13:41.695 { 00:13:41.695 "name": "pt3", 00:13:41.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.695 "is_configured": true, 00:13:41.695 "data_offset": 2048, 00:13:41.695 "data_size": 63488 00:13:41.695 } 00:13:41.695 ] 00:13:41.695 } 00:13:41.695 } 00:13:41.695 }' 00:13:41.695 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.953 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:41.953 pt2 00:13:41.953 pt3' 00:13:41.953 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.953 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:41.953 11:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.953 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.953 "name": "pt1", 00:13:41.953 "aliases": [ 
00:13:41.953 "00000000-0000-0000-0000-000000000001" 00:13:41.953 ], 00:13:41.953 "product_name": "passthru", 00:13:41.953 "block_size": 512, 00:13:41.953 "num_blocks": 65536, 00:13:41.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.953 "assigned_rate_limits": { 00:13:41.953 "rw_ios_per_sec": 0, 00:13:41.953 "rw_mbytes_per_sec": 0, 00:13:41.953 "r_mbytes_per_sec": 0, 00:13:41.953 "w_mbytes_per_sec": 0 00:13:41.953 }, 00:13:41.953 "claimed": true, 00:13:41.953 "claim_type": "exclusive_write", 00:13:41.953 "zoned": false, 00:13:41.953 "supported_io_types": { 00:13:41.953 "read": true, 00:13:41.953 "write": true, 00:13:41.953 "unmap": true, 00:13:41.953 "flush": true, 00:13:41.953 "reset": true, 00:13:41.953 "nvme_admin": false, 00:13:41.953 "nvme_io": false, 00:13:41.953 "nvme_io_md": false, 00:13:41.953 "write_zeroes": true, 00:13:41.953 "zcopy": true, 00:13:41.953 "get_zone_info": false, 00:13:41.953 "zone_management": false, 00:13:41.953 "zone_append": false, 00:13:41.953 "compare": false, 00:13:41.953 "compare_and_write": false, 00:13:41.953 "abort": true, 00:13:41.953 "seek_hole": false, 00:13:41.953 "seek_data": false, 00:13:41.953 "copy": true, 00:13:41.953 "nvme_iov_md": false 00:13:41.953 }, 00:13:41.953 "memory_domains": [ 00:13:41.953 { 00:13:41.953 "dma_device_id": "system", 00:13:41.953 "dma_device_type": 1 00:13:41.953 }, 00:13:41.953 { 00:13:41.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.953 "dma_device_type": 2 00:13:41.953 } 00:13:41.953 ], 00:13:41.953 "driver_specific": { 00:13:41.953 "passthru": { 00:13:41.953 "name": "pt1", 00:13:41.953 "base_bdev_name": "malloc1" 00:13:41.953 } 00:13:41.953 } 00:13:41.953 }' 00:13:41.953 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.211 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.469 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.469 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.469 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.469 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:42.469 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.726 "name": "pt2", 00:13:42.726 "aliases": [ 00:13:42.726 "00000000-0000-0000-0000-000000000002" 00:13:42.726 ], 00:13:42.726 "product_name": "passthru", 00:13:42.726 "block_size": 
512, 00:13:42.726 "num_blocks": 65536, 00:13:42.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.726 "assigned_rate_limits": { 00:13:42.726 "rw_ios_per_sec": 0, 00:13:42.726 "rw_mbytes_per_sec": 0, 00:13:42.726 "r_mbytes_per_sec": 0, 00:13:42.726 "w_mbytes_per_sec": 0 00:13:42.726 }, 00:13:42.726 "claimed": true, 00:13:42.726 "claim_type": "exclusive_write", 00:13:42.726 "zoned": false, 00:13:42.726 "supported_io_types": { 00:13:42.726 "read": true, 00:13:42.726 "write": true, 00:13:42.726 "unmap": true, 00:13:42.726 "flush": true, 00:13:42.726 "reset": true, 00:13:42.726 "nvme_admin": false, 00:13:42.726 "nvme_io": false, 00:13:42.726 "nvme_io_md": false, 00:13:42.726 "write_zeroes": true, 00:13:42.726 "zcopy": true, 00:13:42.726 "get_zone_info": false, 00:13:42.726 "zone_management": false, 00:13:42.726 "zone_append": false, 00:13:42.726 "compare": false, 00:13:42.726 "compare_and_write": false, 00:13:42.726 "abort": true, 00:13:42.726 "seek_hole": false, 00:13:42.726 "seek_data": false, 00:13:42.726 "copy": true, 00:13:42.726 "nvme_iov_md": false 00:13:42.726 }, 00:13:42.726 "memory_domains": [ 00:13:42.726 { 00:13:42.726 "dma_device_id": "system", 00:13:42.726 "dma_device_type": 1 00:13:42.726 }, 00:13:42.726 { 00:13:42.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.726 "dma_device_type": 2 00:13:42.726 } 00:13:42.726 ], 00:13:42.726 "driver_specific": { 00:13:42.726 "passthru": { 00:13:42.726 "name": "pt2", 00:13:42.726 "base_bdev_name": "malloc2" 00:13:42.726 } 00:13:42.726 } 00:13:42.726 }' 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.726 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:42.984 11:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.242 "name": "pt3", 00:13:43.242 "aliases": [ 00:13:43.242 "00000000-0000-0000-0000-000000000003" 00:13:43.242 ], 00:13:43.242 "product_name": "passthru", 00:13:43.242 "block_size": 512, 00:13:43.242 "num_blocks": 65536, 00:13:43.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.242 
"assigned_rate_limits": { 00:13:43.242 "rw_ios_per_sec": 0, 00:13:43.242 "rw_mbytes_per_sec": 0, 00:13:43.242 "r_mbytes_per_sec": 0, 00:13:43.242 "w_mbytes_per_sec": 0 00:13:43.242 }, 00:13:43.242 "claimed": true, 00:13:43.242 "claim_type": "exclusive_write", 00:13:43.242 "zoned": false, 00:13:43.242 "supported_io_types": { 00:13:43.242 "read": true, 00:13:43.242 "write": true, 00:13:43.242 "unmap": true, 00:13:43.242 "flush": true, 00:13:43.242 "reset": true, 00:13:43.242 "nvme_admin": false, 00:13:43.242 "nvme_io": false, 00:13:43.242 "nvme_io_md": false, 00:13:43.242 "write_zeroes": true, 00:13:43.242 "zcopy": true, 00:13:43.242 "get_zone_info": false, 00:13:43.242 "zone_management": false, 00:13:43.242 "zone_append": false, 00:13:43.242 "compare": false, 00:13:43.242 "compare_and_write": false, 00:13:43.242 "abort": true, 00:13:43.242 "seek_hole": false, 00:13:43.242 "seek_data": false, 00:13:43.242 "copy": true, 00:13:43.242 "nvme_iov_md": false 00:13:43.242 }, 00:13:43.242 "memory_domains": [ 00:13:43.242 { 00:13:43.242 "dma_device_id": "system", 00:13:43.242 "dma_device_type": 1 00:13:43.242 }, 00:13:43.242 { 00:13:43.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.242 "dma_device_type": 2 00:13:43.242 } 00:13:43.242 ], 00:13:43.242 "driver_specific": { 00:13:43.242 "passthru": { 00:13:43.242 "name": "pt3", 00:13:43.242 "base_bdev_name": "malloc3" 00:13:43.242 } 00:13:43.242 } 00:13:43.242 }' 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.242 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:43.500 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.758 [2024-07-25 11:55:29.730193] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.758 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f16531c1-20c4-479b-9491-0ce1c795a1f3 00:13:43.758 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f16531c1-20c4-479b-9491-0ce1c795a1f3 ']' 00:13:43.758 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:44.018 [2024-07-25 11:55:29.958523] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.018 [2024-07-25 11:55:29.958540] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.018 [2024-07-25 11:55:29.958584] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.018 [2024-07-25 11:55:29.958637] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.018 [2024-07-25 11:55:29.958648] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x218ccb0 name raid_bdev1, state offline 00:13:44.018 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.018 11:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:44.341 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:44.600 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:44.600 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:44.858 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:44.858 11:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:45.117 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.118 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:45.376 [2024-07-25 11:55:31.338097] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:45.376 [2024-07-25 11:55:31.339387] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:45.376 [2024-07-25 11:55:31.339428] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:45.376 [2024-07-25 11:55:31.339468] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:45.376 [2024-07-25 11:55:31.339503] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:45.376 [2024-07-25 11:55:31.339524] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:45.376 [2024-07-25 11:55:31.339546] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.376 [2024-07-25 11:55:31.339555] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2195d50 name raid_bdev1, state configuring 00:13:45.376 request: 00:13:45.376 { 00:13:45.376 "name": "raid_bdev1", 00:13:45.376 "raid_level": "raid0", 00:13:45.376 "base_bdevs": [ 00:13:45.376 "malloc1", 00:13:45.376 "malloc2", 00:13:45.376 "malloc3" 00:13:45.376 ], 00:13:45.376 "strip_size_kb": 64, 00:13:45.376 "superblock": false, 00:13:45.376 "method": "bdev_raid_create", 00:13:45.376 "req_id": 1 00:13:45.376 } 00:13:45.376 Got JSON-RPC error response 00:13:45.376 response: 00:13:45.376 { 00:13:45.376 "code": -17, 00:13:45.376 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:45.376 } 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.376 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:45.634 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:45.634 11:55:31 bdev_raid.raid_superblock_test -- 
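At this point raid_bdev1 and the pt1/pt2/pt3 passthru bdevs have been deleted, but malloc1-malloc3 still carry the raid superblock written when the array was assembled, so the bdev_raid_create call above is issued through the harness's NOT wrapper and is expected to fail; the DEBUG and JSON-RPC error output that follows confirms the -17 (File exists) rejection. A simplified stand-in for that negative check, assuming the same socket and the leftover malloc bdevs (and that rpc.py exits non-zero on a JSON-RPC error, as the trace suggests), might look like:

# Sketch: assert that assembling a new raid0 on bdevs with a stale superblock is rejected.
RPC() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

if out=$(RPC bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 2>&1); then
    echo "unexpected success: raid_bdev1 should not assemble over claimed superblocks" >&2
    exit 1
fi
# The JSON-RPC response carries code -17 and "File exists".
echo "$out" | grep -q 'File exists'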
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:45.634 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:45.892 [2024-07-25 11:55:31.787219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:45.892 [2024-07-25 11:55:31.787255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.892 [2024-07-25 11:55:31.787270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2189d00 00:13:45.892 [2024-07-25 11:55:31.787281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.892 [2024-07-25 11:55:31.788707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.892 [2024-07-25 11:55:31.788732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:45.892 [2024-07-25 11:55:31.788789] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:45.892 [2024-07-25 11:55:31.788812] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:45.892 pt1 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.892 11:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.150 11:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.150 "name": "raid_bdev1", 00:13:46.150 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:46.150 "strip_size_kb": 64, 00:13:46.150 "state": "configuring", 00:13:46.150 "raid_level": "raid0", 00:13:46.150 "superblock": true, 00:13:46.150 "num_base_bdevs": 3, 00:13:46.150 "num_base_bdevs_discovered": 1, 00:13:46.150 "num_base_bdevs_operational": 3, 00:13:46.150 "base_bdevs_list": [ 00:13:46.150 { 00:13:46.150 "name": "pt1", 00:13:46.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.150 "is_configured": true, 00:13:46.150 "data_offset": 2048, 00:13:46.150 "data_size": 63488 00:13:46.150 }, 00:13:46.150 { 00:13:46.150 "name": null, 00:13:46.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.150 
"is_configured": false, 00:13:46.150 "data_offset": 2048, 00:13:46.150 "data_size": 63488 00:13:46.150 }, 00:13:46.150 { 00:13:46.150 "name": null, 00:13:46.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.150 "is_configured": false, 00:13:46.150 "data_offset": 2048, 00:13:46.150 "data_size": 63488 00:13:46.150 } 00:13:46.150 ] 00:13:46.150 }' 00:13:46.150 11:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.150 11:55:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.719 11:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:46.719 11:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:46.719 [2024-07-25 11:55:32.809936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:46.719 [2024-07-25 11:55:32.809983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.719 [2024-07-25 11:55:32.810003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x218a370 00:13:46.719 [2024-07-25 11:55:32.810014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.719 [2024-07-25 11:55:32.810373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.719 [2024-07-25 11:55:32.810397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:46.719 [2024-07-25 11:55:32.810459] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:46.719 [2024-07-25 11:55:32.810477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.719 pt2 00:13:46.719 11:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:46.978 [2024-07-25 11:55:33.022504] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.978 11:55:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.236 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.236 "name": "raid_bdev1", 00:13:47.236 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:47.236 "strip_size_kb": 64, 00:13:47.236 "state": "configuring", 00:13:47.236 "raid_level": "raid0", 00:13:47.236 "superblock": true, 00:13:47.236 "num_base_bdevs": 3, 00:13:47.236 "num_base_bdevs_discovered": 1, 00:13:47.236 "num_base_bdevs_operational": 3, 00:13:47.236 "base_bdevs_list": [ 00:13:47.236 { 00:13:47.236 "name": "pt1", 00:13:47.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.236 "is_configured": true, 00:13:47.236 "data_offset": 2048, 00:13:47.236 "data_size": 63488 00:13:47.236 }, 00:13:47.236 { 00:13:47.236 "name": null, 00:13:47.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.236 "is_configured": false, 00:13:47.236 "data_offset": 2048, 00:13:47.236 "data_size": 63488 00:13:47.236 }, 00:13:47.236 { 00:13:47.236 "name": null, 00:13:47.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.236 "is_configured": false, 00:13:47.236 "data_offset": 2048, 00:13:47.236 "data_size": 63488 00:13:47.236 } 00:13:47.236 ] 00:13:47.236 }' 00:13:47.236 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.236 11:55:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.801 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:47.801 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:47.801 11:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.060 [2024-07-25 11:55:34.041199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.060 [2024-07-25 11:55:34.041246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.060 [2024-07-25 11:55:34.041264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fea390 00:13:48.060 [2024-07-25 11:55:34.041275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.060 [2024-07-25 11:55:34.041588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.060 [2024-07-25 11:55:34.041604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.060 [2024-07-25 11:55:34.041659] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:48.060 [2024-07-25 11:55:34.041677] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.060 pt2 00:13:48.060 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:48.060 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:48.060 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:48.318 [2024-07-25 11:55:34.269798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:48.318 [2024-07-25 11:55:34.269831] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:48.318 [2024-07-25 11:55:34.269846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fe9e20 00:13:48.318 [2024-07-25 11:55:34.269856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.318 [2024-07-25 11:55:34.270127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.318 [2024-07-25 11:55:34.270152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:48.318 [2024-07-25 11:55:34.270199] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:48.318 [2024-07-25 11:55:34.270215] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:48.318 [2024-07-25 11:55:34.270310] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x218b530 00:13:48.318 [2024-07-25 11:55:34.270319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:48.318 [2024-07-25 11:55:34.270471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fee540 00:13:48.318 [2024-07-25 11:55:34.270581] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x218b530 00:13:48.318 [2024-07-25 11:55:34.270589] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x218b530 00:13:48.318 [2024-07-25 11:55:34.270675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.318 pt3 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.318 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.576 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.576 "name": "raid_bdev1", 00:13:48.576 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:48.576 "strip_size_kb": 64, 00:13:48.576 "state": "online", 00:13:48.576 "raid_level": "raid0", 00:13:48.576 "superblock": true, 00:13:48.576 "num_base_bdevs": 3, 00:13:48.576 
"num_base_bdevs_discovered": 3, 00:13:48.576 "num_base_bdevs_operational": 3, 00:13:48.576 "base_bdevs_list": [ 00:13:48.576 { 00:13:48.576 "name": "pt1", 00:13:48.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.576 "is_configured": true, 00:13:48.576 "data_offset": 2048, 00:13:48.576 "data_size": 63488 00:13:48.576 }, 00:13:48.576 { 00:13:48.576 "name": "pt2", 00:13:48.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.576 "is_configured": true, 00:13:48.576 "data_offset": 2048, 00:13:48.576 "data_size": 63488 00:13:48.576 }, 00:13:48.577 { 00:13:48.577 "name": "pt3", 00:13:48.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.577 "is_configured": true, 00:13:48.577 "data_offset": 2048, 00:13:48.577 "data_size": 63488 00:13:48.577 } 00:13:48.577 ] 00:13:48.577 }' 00:13:48.577 11:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.577 11:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:49.512 [2024-07-25 11:55:35.545437] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:49.512 "name": "raid_bdev1", 00:13:49.512 "aliases": [ 00:13:49.512 "f16531c1-20c4-479b-9491-0ce1c795a1f3" 00:13:49.512 ], 00:13:49.512 "product_name": "Raid Volume", 00:13:49.512 "block_size": 512, 00:13:49.512 "num_blocks": 190464, 00:13:49.512 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:49.512 "assigned_rate_limits": { 00:13:49.512 "rw_ios_per_sec": 0, 00:13:49.512 "rw_mbytes_per_sec": 0, 00:13:49.512 "r_mbytes_per_sec": 0, 00:13:49.512 "w_mbytes_per_sec": 0 00:13:49.512 }, 00:13:49.512 "claimed": false, 00:13:49.512 "zoned": false, 00:13:49.512 "supported_io_types": { 00:13:49.512 "read": true, 00:13:49.512 "write": true, 00:13:49.512 "unmap": true, 00:13:49.512 "flush": true, 00:13:49.512 "reset": true, 00:13:49.512 "nvme_admin": false, 00:13:49.512 "nvme_io": false, 00:13:49.512 "nvme_io_md": false, 00:13:49.512 "write_zeroes": true, 00:13:49.512 "zcopy": false, 00:13:49.512 "get_zone_info": false, 00:13:49.512 "zone_management": false, 00:13:49.512 "zone_append": false, 00:13:49.512 "compare": false, 00:13:49.512 "compare_and_write": false, 00:13:49.512 "abort": false, 00:13:49.512 "seek_hole": false, 00:13:49.512 "seek_data": false, 00:13:49.512 "copy": false, 00:13:49.512 "nvme_iov_md": false 00:13:49.512 }, 00:13:49.512 "memory_domains": [ 00:13:49.512 { 00:13:49.512 "dma_device_id": "system", 00:13:49.512 "dma_device_type": 1 00:13:49.512 }, 
00:13:49.512 { 00:13:49.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.512 "dma_device_type": 2 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "dma_device_id": "system", 00:13:49.512 "dma_device_type": 1 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.512 "dma_device_type": 2 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "dma_device_id": "system", 00:13:49.512 "dma_device_type": 1 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.512 "dma_device_type": 2 00:13:49.512 } 00:13:49.512 ], 00:13:49.512 "driver_specific": { 00:13:49.512 "raid": { 00:13:49.512 "uuid": "f16531c1-20c4-479b-9491-0ce1c795a1f3", 00:13:49.512 "strip_size_kb": 64, 00:13:49.512 "state": "online", 00:13:49.512 "raid_level": "raid0", 00:13:49.512 "superblock": true, 00:13:49.512 "num_base_bdevs": 3, 00:13:49.512 "num_base_bdevs_discovered": 3, 00:13:49.512 "num_base_bdevs_operational": 3, 00:13:49.512 "base_bdevs_list": [ 00:13:49.512 { 00:13:49.512 "name": "pt1", 00:13:49.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:49.512 "is_configured": true, 00:13:49.512 "data_offset": 2048, 00:13:49.512 "data_size": 63488 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "name": "pt2", 00:13:49.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.512 "is_configured": true, 00:13:49.512 "data_offset": 2048, 00:13:49.512 "data_size": 63488 00:13:49.512 }, 00:13:49.512 { 00:13:49.512 "name": "pt3", 00:13:49.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:49.512 "is_configured": true, 00:13:49.512 "data_offset": 2048, 00:13:49.512 "data_size": 63488 00:13:49.512 } 00:13:49.512 ] 00:13:49.512 } 00:13:49.512 } 00:13:49.512 }' 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:49.512 pt2 00:13:49.512 pt3' 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:49.512 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.771 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.771 "name": "pt1", 00:13:49.771 "aliases": [ 00:13:49.771 "00000000-0000-0000-0000-000000000001" 00:13:49.771 ], 00:13:49.771 "product_name": "passthru", 00:13:49.771 "block_size": 512, 00:13:49.771 "num_blocks": 65536, 00:13:49.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:49.771 "assigned_rate_limits": { 00:13:49.771 "rw_ios_per_sec": 0, 00:13:49.771 "rw_mbytes_per_sec": 0, 00:13:49.771 "r_mbytes_per_sec": 0, 00:13:49.771 "w_mbytes_per_sec": 0 00:13:49.771 }, 00:13:49.771 "claimed": true, 00:13:49.771 "claim_type": "exclusive_write", 00:13:49.771 "zoned": false, 00:13:49.771 "supported_io_types": { 00:13:49.771 "read": true, 00:13:49.771 "write": true, 00:13:49.771 "unmap": true, 00:13:49.771 "flush": true, 00:13:49.771 "reset": true, 00:13:49.771 "nvme_admin": false, 00:13:49.771 "nvme_io": false, 00:13:49.771 "nvme_io_md": false, 00:13:49.771 "write_zeroes": true, 00:13:49.771 "zcopy": true, 00:13:49.771 "get_zone_info": false, 00:13:49.771 "zone_management": false, 00:13:49.771 
"zone_append": false, 00:13:49.771 "compare": false, 00:13:49.771 "compare_and_write": false, 00:13:49.771 "abort": true, 00:13:49.771 "seek_hole": false, 00:13:49.771 "seek_data": false, 00:13:49.771 "copy": true, 00:13:49.771 "nvme_iov_md": false 00:13:49.771 }, 00:13:49.771 "memory_domains": [ 00:13:49.771 { 00:13:49.771 "dma_device_id": "system", 00:13:49.771 "dma_device_type": 1 00:13:49.771 }, 00:13:49.771 { 00:13:49.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.771 "dma_device_type": 2 00:13:49.771 } 00:13:49.771 ], 00:13:49.771 "driver_specific": { 00:13:49.771 "passthru": { 00:13:49.771 "name": "pt1", 00:13:49.771 "base_bdev_name": "malloc1" 00:13:49.771 } 00:13:49.771 } 00:13:49.771 }' 00:13:49.771 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.771 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.029 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.029 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.029 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.029 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.029 11:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.029 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.029 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.029 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.029 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:50.288 "name": "pt2", 00:13:50.288 "aliases": [ 00:13:50.288 "00000000-0000-0000-0000-000000000002" 00:13:50.288 ], 00:13:50.288 "product_name": "passthru", 00:13:50.288 "block_size": 512, 00:13:50.288 "num_blocks": 65536, 00:13:50.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.288 "assigned_rate_limits": { 00:13:50.288 "rw_ios_per_sec": 0, 00:13:50.288 "rw_mbytes_per_sec": 0, 00:13:50.288 "r_mbytes_per_sec": 0, 00:13:50.288 "w_mbytes_per_sec": 0 00:13:50.288 }, 00:13:50.288 "claimed": true, 00:13:50.288 "claim_type": "exclusive_write", 00:13:50.288 "zoned": false, 00:13:50.288 "supported_io_types": { 00:13:50.288 "read": true, 00:13:50.288 "write": true, 00:13:50.288 "unmap": true, 00:13:50.288 "flush": true, 00:13:50.288 "reset": true, 00:13:50.288 "nvme_admin": false, 00:13:50.288 "nvme_io": false, 00:13:50.288 "nvme_io_md": false, 00:13:50.288 "write_zeroes": true, 00:13:50.288 "zcopy": true, 00:13:50.288 "get_zone_info": false, 00:13:50.288 "zone_management": false, 00:13:50.288 "zone_append": false, 00:13:50.288 "compare": false, 00:13:50.288 "compare_and_write": false, 00:13:50.288 "abort": true, 00:13:50.288 
"seek_hole": false, 00:13:50.288 "seek_data": false, 00:13:50.288 "copy": true, 00:13:50.288 "nvme_iov_md": false 00:13:50.288 }, 00:13:50.288 "memory_domains": [ 00:13:50.288 { 00:13:50.288 "dma_device_id": "system", 00:13:50.288 "dma_device_type": 1 00:13:50.288 }, 00:13:50.288 { 00:13:50.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.288 "dma_device_type": 2 00:13:50.288 } 00:13:50.288 ], 00:13:50.288 "driver_specific": { 00:13:50.288 "passthru": { 00:13:50.288 "name": "pt2", 00:13:50.288 "base_bdev_name": "malloc2" 00:13:50.288 } 00:13:50.288 } 00:13:50.288 }' 00:13:50.288 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.545 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.804 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.804 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.804 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.804 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:50.804 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.062 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.062 "name": "pt3", 00:13:51.062 "aliases": [ 00:13:51.062 "00000000-0000-0000-0000-000000000003" 00:13:51.062 ], 00:13:51.062 "product_name": "passthru", 00:13:51.062 "block_size": 512, 00:13:51.062 "num_blocks": 65536, 00:13:51.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:51.062 "assigned_rate_limits": { 00:13:51.062 "rw_ios_per_sec": 0, 00:13:51.062 "rw_mbytes_per_sec": 0, 00:13:51.062 "r_mbytes_per_sec": 0, 00:13:51.062 "w_mbytes_per_sec": 0 00:13:51.062 }, 00:13:51.062 "claimed": true, 00:13:51.062 "claim_type": "exclusive_write", 00:13:51.062 "zoned": false, 00:13:51.062 "supported_io_types": { 00:13:51.062 "read": true, 00:13:51.062 "write": true, 00:13:51.062 "unmap": true, 00:13:51.062 "flush": true, 00:13:51.062 "reset": true, 00:13:51.062 "nvme_admin": false, 00:13:51.062 "nvme_io": false, 00:13:51.062 "nvme_io_md": false, 00:13:51.062 "write_zeroes": true, 00:13:51.062 "zcopy": true, 00:13:51.062 "get_zone_info": false, 00:13:51.062 "zone_management": false, 00:13:51.062 "zone_append": false, 00:13:51.062 "compare": false, 00:13:51.062 "compare_and_write": false, 00:13:51.062 "abort": true, 00:13:51.062 "seek_hole": false, 00:13:51.062 "seek_data": false, 00:13:51.062 "copy": true, 00:13:51.062 "nvme_iov_md": false 00:13:51.062 }, 
00:13:51.062 "memory_domains": [ 00:13:51.062 { 00:13:51.062 "dma_device_id": "system", 00:13:51.062 "dma_device_type": 1 00:13:51.062 }, 00:13:51.062 { 00:13:51.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.062 "dma_device_type": 2 00:13:51.062 } 00:13:51.062 ], 00:13:51.062 "driver_specific": { 00:13:51.062 "passthru": { 00:13:51.062 "name": "pt3", 00:13:51.062 "base_bdev_name": "malloc3" 00:13:51.062 } 00:13:51.062 } 00:13:51.062 }' 00:13:51.062 11:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.062 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:51.321 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:51.579 [2024-07-25 11:55:37.502592] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f16531c1-20c4-479b-9491-0ce1c795a1f3 '!=' f16531c1-20c4-479b-9491-0ce1c795a1f3 ']' 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4128173 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4128173 ']' 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4128173 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4128173 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4128173' 00:13:51.579 
killing process with pid 4128173 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4128173 00:13:51.579 [2024-07-25 11:55:37.578505] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.579 [2024-07-25 11:55:37.578562] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.579 [2024-07-25 11:55:37.578610] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.579 [2024-07-25 11:55:37.578621] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x218b530 name raid_bdev1, state offline 00:13:51.579 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4128173 00:13:51.579 [2024-07-25 11:55:37.602979] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.838 11:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:51.838 00:13:51.838 real 0m13.627s 00:13:51.838 user 0m24.586s 00:13:51.838 sys 0m2.425s 00:13:51.838 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.838 11:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.838 ************************************ 00:13:51.838 END TEST raid_superblock_test 00:13:51.838 ************************************ 00:13:51.838 11:55:37 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:51.838 11:55:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:51.838 11:55:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.838 11:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.838 ************************************ 00:13:51.838 START TEST raid_read_error_test 00:13:51.838 ************************************ 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8YNf6CBwoF 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4130839 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4130839 /var/tmp/spdk-raid.sock 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4130839 ']' 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.838 11:55:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.838 [2024-07-25 11:55:37.955786] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:13:51.838 [2024-07-25 11:55:37.955843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130839 ] 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:52.097 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.097 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:52.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.098 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:52.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.098 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:52.098 [2024-07-25 11:55:38.075407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.098 [2024-07-25 11:55:38.161235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.356 [2024-07-25 11:55:38.222291] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.356 [2024-07-25 11:55:38.222323] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.922 11:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.922 11:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:52.922 11:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:52.922 11:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.180 BaseBdev1_malloc 00:13:53.180 11:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:53.180 true 00:13:53.180 11:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.439 [2024-07-25 11:55:39.496377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.439 [2024-07-25 11:55:39.496415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.439 [2024-07-25 11:55:39.496432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe74190 00:13:53.439 [2024-07-25 11:55:39.496443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.439 [2024-07-25 11:55:39.497925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.439 [2024-07-25 11:55:39.497950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.439 BaseBdev1 00:13:53.439 11:55:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.439 11:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.697 BaseBdev2_malloc 00:13:53.697 11:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:53.955 true 00:13:53.955 11:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:54.213 [2024-07-25 11:55:40.186547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:54.213 [2024-07-25 11:55:40.186587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.213 [2024-07-25 11:55:40.186604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe78e20 00:13:54.213 [2024-07-25 11:55:40.186615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.213 [2024-07-25 11:55:40.188060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.213 [2024-07-25 11:55:40.188085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.213 BaseBdev2 00:13:54.213 11:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.213 11:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:54.472 BaseBdev3_malloc 00:13:54.472 11:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:54.730 true 00:13:54.730 11:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:54.989 [2024-07-25 11:55:40.864575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:54.989 [2024-07-25 11:55:40.864615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.989 [2024-07-25 11:55:40.864634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe79d90 00:13:54.989 [2024-07-25 11:55:40.864646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.989 [2024-07-25 11:55:40.866017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.989 [2024-07-25 11:55:40.866043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:54.989 BaseBdev3 00:13:54.989 11:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:54.989 [2024-07-25 11:55:41.093213] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.989 [2024-07-25 11:55:41.094418] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.989 [2024-07-25 11:55:41.094482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.989 [2024-07-25 11:55:41.094669] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe7bba0 00:13:54.989 [2024-07-25 11:55:41.094680] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:54.989 [2024-07-25 11:55:41.094851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xccfab0 00:13:54.989 [2024-07-25 11:55:41.094989] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe7bba0 00:13:54.989 [2024-07-25 11:55:41.094998] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe7bba0 00:13:54.989 [2024-07-25 11:55:41.095094] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.248 "name": "raid_bdev1", 00:13:55.248 "uuid": "1b985e10-31e1-463d-bb67-0a76354eac01", 00:13:55.248 "strip_size_kb": 64, 00:13:55.248 "state": "online", 00:13:55.248 "raid_level": "raid0", 00:13:55.248 "superblock": true, 00:13:55.248 "num_base_bdevs": 3, 00:13:55.248 "num_base_bdevs_discovered": 3, 00:13:55.248 "num_base_bdevs_operational": 3, 00:13:55.248 "base_bdevs_list": [ 00:13:55.248 { 00:13:55.248 "name": "BaseBdev1", 00:13:55.248 "uuid": "566d9954-c73e-5abe-bba6-2d74a7888b06", 00:13:55.248 "is_configured": true, 00:13:55.248 "data_offset": 2048, 00:13:55.248 "data_size": 63488 00:13:55.248 }, 00:13:55.248 { 00:13:55.248 "name": "BaseBdev2", 00:13:55.248 "uuid": "cfbfd4b1-1b42-5c59-bff8-4c9f5773aab9", 00:13:55.248 "is_configured": true, 00:13:55.248 "data_offset": 2048, 00:13:55.248 "data_size": 63488 00:13:55.248 }, 00:13:55.248 { 00:13:55.248 "name": "BaseBdev3", 00:13:55.248 "uuid": "1f93c421-e065-5328-8b40-d998e0f784b1", 00:13:55.248 "is_configured": true, 00:13:55.248 "data_offset": 2048, 00:13:55.248 "data_size": 63488 
00:13:55.248 } 00:13:55.248 ] 00:13:55.248 }' 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.248 11:55:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.814 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:55.814 11:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:56.073 [2024-07-25 11:55:41.975783] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ce6c0 00:13:57.012 11:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.012 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.271 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.271 "name": "raid_bdev1", 00:13:57.271 "uuid": "1b985e10-31e1-463d-bb67-0a76354eac01", 00:13:57.271 "strip_size_kb": 64, 00:13:57.271 "state": "online", 00:13:57.271 "raid_level": "raid0", 00:13:57.271 "superblock": true, 00:13:57.271 "num_base_bdevs": 3, 00:13:57.271 "num_base_bdevs_discovered": 3, 00:13:57.271 "num_base_bdevs_operational": 3, 00:13:57.271 "base_bdevs_list": [ 00:13:57.271 { 00:13:57.271 "name": "BaseBdev1", 00:13:57.271 "uuid": "566d9954-c73e-5abe-bba6-2d74a7888b06", 00:13:57.271 "is_configured": true, 00:13:57.271 "data_offset": 2048, 00:13:57.271 "data_size": 63488 00:13:57.271 }, 00:13:57.271 { 00:13:57.271 "name": "BaseBdev2", 00:13:57.271 "uuid": "cfbfd4b1-1b42-5c59-bff8-4c9f5773aab9", 00:13:57.271 "is_configured": true, 00:13:57.271 "data_offset": 2048, 
00:13:57.271 "data_size": 63488 00:13:57.271 }, 00:13:57.271 { 00:13:57.271 "name": "BaseBdev3", 00:13:57.271 "uuid": "1f93c421-e065-5328-8b40-d998e0f784b1", 00:13:57.271 "is_configured": true, 00:13:57.271 "data_offset": 2048, 00:13:57.271 "data_size": 63488 00:13:57.271 } 00:13:57.271 ] 00:13:57.271 }' 00:13:57.271 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.271 11:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.839 11:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:58.099 [2024-07-25 11:55:44.134759] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.099 [2024-07-25 11:55:44.134788] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.099 [2024-07-25 11:55:44.137691] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.099 [2024-07-25 11:55:44.137724] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.099 [2024-07-25 11:55:44.137755] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.099 [2024-07-25 11:55:44.137765] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe7bba0 name raid_bdev1, state offline 00:13:58.099 0 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4130839 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4130839 ']' 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4130839 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4130839 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.099 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4130839' 00:13:58.099 killing process with pid 4130839 00:13:58.100 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4130839 00:13:58.100 [2024-07-25 11:55:44.212184] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.100 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4130839 00:13:58.359 [2024-07-25 11:55:44.230895] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8YNf6CBwoF 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:13:58.359 00:13:58.359 real 0m6.558s 00:13:58.359 user 0m10.312s 00:13:58.359 sys 0m1.172s 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.359 11:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.359 ************************************ 00:13:58.359 END TEST raid_read_error_test 00:13:58.359 ************************************ 00:13:58.652 11:55:44 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:58.652 11:55:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:58.652 11:55:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.652 11:55:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 ************************************ 00:13:58.652 START TEST raid_write_error_test 00:13:58.652 ************************************ 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:58.652 11:55:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Oq5qWwLrY8 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4132006 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4132006 /var/tmp/spdk-raid.sock 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4132006 ']' 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:58.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.652 11:55:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 [2024-07-25 11:55:44.599614] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:13:58.652 [2024-07-25 11:55:44.599676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132006 ] 00:13:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices; EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (this pair of messages repeats for each device in the range) 00:13:58.653 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:58.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:58.653 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:58.653 [2024-07-25 11:55:44.730640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.912 [2024-07-25 11:55:44.813469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.912 [2024-07-25 11:55:44.874732] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.912 [2024-07-25 11:55:44.874775] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.480 11:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.480 11:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:59.480 11:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:59.480 11:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.739 BaseBdev1_malloc 00:13:59.739 11:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:59.999 true 00:13:59.999 11:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:00.258 [2024-07-25 11:55:46.176200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:00.258 [2024-07-25 11:55:46.176244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.258 [2024-07-25 11:55:46.176260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2757190 00:14:00.258 [2024-07-25 11:55:46.176271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.258 [2024-07-25 11:55:46.177738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.258 [2024-07-25 11:55:46.177764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.258 BaseBdev1 00:14:00.258 11:55:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:00.258 11:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.517 BaseBdev2_malloc 00:14:00.517 11:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:00.777 true 00:14:00.777 11:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:00.777 [2024-07-25 11:55:46.862177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:00.777 [2024-07-25 11:55:46.862215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.777 [2024-07-25 11:55:46.862231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275be20 00:14:00.777 [2024-07-25 11:55:46.862242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.777 [2024-07-25 11:55:46.863504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.777 [2024-07-25 11:55:46.863529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.777 BaseBdev2 00:14:00.777 11:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:00.777 11:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:01.036 BaseBdev3_malloc 00:14:01.036 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:01.295 true 00:14:01.295 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:01.554 [2024-07-25 11:55:47.548088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:01.554 [2024-07-25 11:55:47.548126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.554 [2024-07-25 11:55:47.548149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275cd90 00:14:01.554 [2024-07-25 11:55:47.548161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.554 [2024-07-25 11:55:47.549426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.554 [2024-07-25 11:55:47.549452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:01.554 BaseBdev3 00:14:01.554 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:01.813 [2024-07-25 11:55:47.772710] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.813 [2024-07-25 11:55:47.773783] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.813 [2024-07-25 11:55:47.773845] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.813 [2024-07-25 11:55:47.774024] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x275eba0 00:14:01.813 [2024-07-25 11:55:47.774035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:01.813 [2024-07-25 11:55:47.774198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b2ab0 00:14:01.813 [2024-07-25 11:55:47.774332] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x275eba0 00:14:01.813 [2024-07-25 11:55:47.774341] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x275eba0 00:14:01.813 [2024-07-25 11:55:47.774431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.814 11:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.073 11:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.073 "name": "raid_bdev1", 00:14:02.073 "uuid": "b7924be0-95dd-44e0-aaa5-7a053c4f5d23", 00:14:02.073 "strip_size_kb": 64, 00:14:02.073 "state": "online", 00:14:02.073 "raid_level": "raid0", 00:14:02.073 "superblock": true, 00:14:02.073 "num_base_bdevs": 3, 00:14:02.073 "num_base_bdevs_discovered": 3, 00:14:02.073 "num_base_bdevs_operational": 3, 00:14:02.073 "base_bdevs_list": [ 00:14:02.073 { 00:14:02.073 "name": "BaseBdev1", 00:14:02.073 "uuid": "ce7bfaf9-c805-515e-9b59-0c49df70dbb4", 00:14:02.073 "is_configured": true, 00:14:02.073 "data_offset": 2048, 00:14:02.073 "data_size": 63488 00:14:02.073 }, 00:14:02.073 { 00:14:02.073 "name": "BaseBdev2", 00:14:02.073 "uuid": "e44bd74f-651a-535c-9d20-46290b5199b8", 00:14:02.073 "is_configured": true, 00:14:02.073 "data_offset": 2048, 00:14:02.073 "data_size": 63488 00:14:02.073 }, 00:14:02.073 { 00:14:02.073 "name": "BaseBdev3", 00:14:02.073 "uuid": "650d6c4a-d9a5-5eed-8c8b-bca3d1bf17d8", 00:14:02.073 "is_configured": true, 00:14:02.073 "data_offset": 2048, 00:14:02.073 
"data_size": 63488 00:14:02.073 } 00:14:02.073 ] 00:14:02.073 }' 00:14:02.073 11:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.073 11:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.641 11:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:02.641 11:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:02.641 [2024-07-25 11:55:48.683364] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22b16c0 00:14:03.578 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.837 11:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.096 11:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.096 "name": "raid_bdev1", 00:14:04.096 "uuid": "b7924be0-95dd-44e0-aaa5-7a053c4f5d23", 00:14:04.096 "strip_size_kb": 64, 00:14:04.096 "state": "online", 00:14:04.096 "raid_level": "raid0", 00:14:04.096 "superblock": true, 00:14:04.096 "num_base_bdevs": 3, 00:14:04.096 "num_base_bdevs_discovered": 3, 00:14:04.096 "num_base_bdevs_operational": 3, 00:14:04.096 "base_bdevs_list": [ 00:14:04.096 { 00:14:04.096 "name": "BaseBdev1", 00:14:04.096 "uuid": "ce7bfaf9-c805-515e-9b59-0c49df70dbb4", 00:14:04.096 "is_configured": true, 00:14:04.096 "data_offset": 2048, 00:14:04.096 "data_size": 63488 00:14:04.096 }, 00:14:04.096 { 00:14:04.096 "name": "BaseBdev2", 00:14:04.096 "uuid": "e44bd74f-651a-535c-9d20-46290b5199b8", 00:14:04.096 "is_configured": true, 
00:14:04.096 "data_offset": 2048, 00:14:04.096 "data_size": 63488 00:14:04.096 }, 00:14:04.096 { 00:14:04.096 "name": "BaseBdev3", 00:14:04.096 "uuid": "650d6c4a-d9a5-5eed-8c8b-bca3d1bf17d8", 00:14:04.096 "is_configured": true, 00:14:04.096 "data_offset": 2048, 00:14:04.096 "data_size": 63488 00:14:04.096 } 00:14:04.096 ] 00:14:04.096 }' 00:14:04.096 11:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.096 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.665 11:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:04.924 [2024-07-25 11:55:50.850375] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.925 [2024-07-25 11:55:50.850414] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.925 [2024-07-25 11:55:50.853331] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.925 [2024-07-25 11:55:50.853365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.925 [2024-07-25 11:55:50.853396] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.925 [2024-07-25 11:55:50.853407] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x275eba0 name raid_bdev1, state offline 00:14:04.925 0 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4132006 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4132006 ']' 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4132006 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4132006 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4132006' 00:14:04.925 killing process with pid 4132006 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4132006 00:14:04.925 [2024-07-25 11:55:50.923090] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.925 11:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4132006 00:14:04.925 [2024-07-25 11:55:50.941349] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Oq5qWwLrY8 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 
00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:14:05.185 00:14:05.185 real 0m6.627s 00:14:05.185 user 0m10.416s 00:14:05.185 sys 0m1.207s 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.185 11:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.185 ************************************ 00:14:05.185 END TEST raid_write_error_test 00:14:05.185 ************************************ 00:14:05.185 11:55:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:05.185 11:55:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:05.185 11:55:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:05.185 11:55:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.185 11:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.185 ************************************ 00:14:05.185 START TEST raid_state_function_test 00:14:05.185 ************************************ 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 
00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:05.185 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4133160 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4133160' 00:14:05.186 Process raid pid: 4133160 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4133160 /var/tmp/spdk-raid.sock 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4133160 ']' 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.186 11:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.186 [2024-07-25 11:55:51.297664] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
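raid_state_function_test, whose setup is traced above, uses a bare bdev_svc application rather than bdevperf and checks raid state transitions purely over RPC: the concat volume is created first, while none of its base bdevs exist, and must sit in the "configuring" state. A minimal sketch of that opening step, with the same path and socket assumptions as the earlier snippets (the trailing .state filter is added here for brevity; the test captures the full JSON and asserts on several fields):

  ./spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # creating the raid before its base bdevs exist is allowed; it simply stays unassembled
  ./spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # expected: "configuring" until all three base bdevs are registered
  ./spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'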
00:14:05.186 [2024-07-25 11:55:51.297722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices; EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (this pair of messages repeats for each device in the range) 00:14:05.446 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:05.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.446 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:05.446 [2024-07-25 11:55:51.427819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.446 [2024-07-25 11:55:51.509050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.705 [2024-07-25 11:55:51.573614] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.705 [2024-07-25 11:55:51.573647] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.273 11:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.273 11:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:06.273 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:06.533 [2024-07-25 11:55:52.401464] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.533 [2024-07-25 11:55:52.401503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.533 [2024-07-25 11:55:52.401513] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.533 [2024-07-25 11:55:52.401524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.533 [2024-07-25 11:55:52.401531] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.533 [2024-07-25 11:55:52.401541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.533 "name": "Existed_Raid", 00:14:06.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.533 "strip_size_kb": 64, 00:14:06.533 "state": "configuring", 00:14:06.533 "raid_level": "concat", 00:14:06.533 "superblock": false, 00:14:06.533 "num_base_bdevs": 3, 00:14:06.533 "num_base_bdevs_discovered": 0, 00:14:06.533 "num_base_bdevs_operational": 3, 00:14:06.533 "base_bdevs_list": [ 00:14:06.533 { 00:14:06.533 "name": "BaseBdev1", 00:14:06.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.533 "is_configured": false, 00:14:06.533 "data_offset": 0, 00:14:06.533 "data_size": 0 00:14:06.533 }, 00:14:06.533 { 00:14:06.533 "name": "BaseBdev2", 00:14:06.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.533 "is_configured": false, 00:14:06.533 "data_offset": 0, 00:14:06.533 "data_size": 0 00:14:06.533 }, 00:14:06.533 { 00:14:06.533 "name": "BaseBdev3", 00:14:06.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.533 "is_configured": false, 00:14:06.533 "data_offset": 0, 00:14:06.533 "data_size": 0 00:14:06.533 } 00:14:06.533 ] 00:14:06.533 }' 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.533 11:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 11:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:07.361 [2024-07-25 11:55:53.411974] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.361 [2024-07-25 11:55:53.412003] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2647f40 name Existed_Raid, state configuring 00:14:07.361 11:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:07.928 [2024-07-25 11:55:53.909289] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.928 [2024-07-25 11:55:53.909317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.928 [2024-07-25 11:55:53.909326] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.928 [2024-07-25 11:55:53.909337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:14:07.928 [2024-07-25 11:55:53.909344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.928 [2024-07-25 11:55:53.909354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.928 11:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.187 [2024-07-25 11:55:54.159368] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.187 BaseBdev1 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.187 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.754 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.014 [ 00:14:09.014 { 00:14:09.014 "name": "BaseBdev1", 00:14:09.014 "aliases": [ 00:14:09.014 "bef5cce6-795a-4823-8f48-27f8999d16e6" 00:14:09.014 ], 00:14:09.014 "product_name": "Malloc disk", 00:14:09.014 "block_size": 512, 00:14:09.014 "num_blocks": 65536, 00:14:09.014 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:09.014 "assigned_rate_limits": { 00:14:09.014 "rw_ios_per_sec": 0, 00:14:09.014 "rw_mbytes_per_sec": 0, 00:14:09.014 "r_mbytes_per_sec": 0, 00:14:09.014 "w_mbytes_per_sec": 0 00:14:09.014 }, 00:14:09.014 "claimed": true, 00:14:09.014 "claim_type": "exclusive_write", 00:14:09.014 "zoned": false, 00:14:09.014 "supported_io_types": { 00:14:09.014 "read": true, 00:14:09.014 "write": true, 00:14:09.014 "unmap": true, 00:14:09.014 "flush": true, 00:14:09.014 "reset": true, 00:14:09.014 "nvme_admin": false, 00:14:09.014 "nvme_io": false, 00:14:09.014 "nvme_io_md": false, 00:14:09.014 "write_zeroes": true, 00:14:09.014 "zcopy": true, 00:14:09.014 "get_zone_info": false, 00:14:09.014 "zone_management": false, 00:14:09.014 "zone_append": false, 00:14:09.014 "compare": false, 00:14:09.014 "compare_and_write": false, 00:14:09.014 "abort": true, 00:14:09.014 "seek_hole": false, 00:14:09.014 "seek_data": false, 00:14:09.014 "copy": true, 00:14:09.014 "nvme_iov_md": false 00:14:09.014 }, 00:14:09.014 "memory_domains": [ 00:14:09.014 { 00:14:09.014 "dma_device_id": "system", 00:14:09.014 "dma_device_type": 1 00:14:09.014 }, 00:14:09.014 { 00:14:09.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.014 "dma_device_type": 2 00:14:09.014 } 00:14:09.014 ], 00:14:09.014 "driver_specific": {} 00:14:09.014 } 00:14:09.014 ] 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:09.014 11:55:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.014 11:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.273 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.273 "name": "Existed_Raid", 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.273 "strip_size_kb": 64, 00:14:09.273 "state": "configuring", 00:14:09.273 "raid_level": "concat", 00:14:09.273 "superblock": false, 00:14:09.273 "num_base_bdevs": 3, 00:14:09.273 "num_base_bdevs_discovered": 1, 00:14:09.273 "num_base_bdevs_operational": 3, 00:14:09.273 "base_bdevs_list": [ 00:14:09.273 { 00:14:09.273 "name": "BaseBdev1", 00:14:09.273 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:09.273 "is_configured": true, 00:14:09.273 "data_offset": 0, 00:14:09.273 "data_size": 65536 00:14:09.273 }, 00:14:09.273 { 00:14:09.273 "name": "BaseBdev2", 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.273 "is_configured": false, 00:14:09.273 "data_offset": 0, 00:14:09.273 "data_size": 0 00:14:09.273 }, 00:14:09.273 { 00:14:09.273 "name": "BaseBdev3", 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.273 "is_configured": false, 00:14:09.273 "data_offset": 0, 00:14:09.273 "data_size": 0 00:14:09.273 } 00:14:09.273 ] 00:14:09.273 }' 00:14:09.273 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.273 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.841 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.841 [2024-07-25 11:55:55.867884] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.841 [2024-07-25 11:55:55.867926] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2647810 name Existed_Raid, state configuring 00:14:09.841 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 
-r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:10.100 [2024-07-25 11:55:56.100526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.100 [2024-07-25 11:55:56.101909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.100 [2024-07-25 11:55:56.101944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.100 [2024-07-25 11:55:56.101954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.100 [2024-07-25 11:55:56.101965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.100 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.101 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.101 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.360 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.360 "name": "Existed_Raid", 00:14:10.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.360 "strip_size_kb": 64, 00:14:10.360 "state": "configuring", 00:14:10.360 "raid_level": "concat", 00:14:10.360 "superblock": false, 00:14:10.360 "num_base_bdevs": 3, 00:14:10.360 "num_base_bdevs_discovered": 1, 00:14:10.360 "num_base_bdevs_operational": 3, 00:14:10.360 "base_bdevs_list": [ 00:14:10.360 { 00:14:10.360 "name": "BaseBdev1", 00:14:10.360 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:10.360 "is_configured": true, 00:14:10.360 "data_offset": 0, 00:14:10.360 "data_size": 65536 00:14:10.360 }, 00:14:10.360 { 00:14:10.360 "name": "BaseBdev2", 00:14:10.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.360 "is_configured": false, 00:14:10.360 "data_offset": 0, 00:14:10.360 "data_size": 0 00:14:10.360 }, 00:14:10.360 { 00:14:10.360 "name": "BaseBdev3", 00:14:10.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.360 "is_configured": false, 00:14:10.360 "data_offset": 0, 
00:14:10.360 "data_size": 0 00:14:10.360 } 00:14:10.360 ] 00:14:10.360 }' 00:14:10.360 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.360 11:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.928 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.187 [2024-07-25 11:55:57.082283] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.187 BaseBdev2 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:11.187 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.446 [ 00:14:11.446 { 00:14:11.446 "name": "BaseBdev2", 00:14:11.446 "aliases": [ 00:14:11.446 "19a92804-b242-4d98-bed6-167a61111c58" 00:14:11.446 ], 00:14:11.446 "product_name": "Malloc disk", 00:14:11.446 "block_size": 512, 00:14:11.446 "num_blocks": 65536, 00:14:11.446 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:11.446 "assigned_rate_limits": { 00:14:11.446 "rw_ios_per_sec": 0, 00:14:11.446 "rw_mbytes_per_sec": 0, 00:14:11.446 "r_mbytes_per_sec": 0, 00:14:11.446 "w_mbytes_per_sec": 0 00:14:11.446 }, 00:14:11.446 "claimed": true, 00:14:11.446 "claim_type": "exclusive_write", 00:14:11.446 "zoned": false, 00:14:11.446 "supported_io_types": { 00:14:11.446 "read": true, 00:14:11.446 "write": true, 00:14:11.446 "unmap": true, 00:14:11.446 "flush": true, 00:14:11.446 "reset": true, 00:14:11.446 "nvme_admin": false, 00:14:11.446 "nvme_io": false, 00:14:11.446 "nvme_io_md": false, 00:14:11.446 "write_zeroes": true, 00:14:11.446 "zcopy": true, 00:14:11.446 "get_zone_info": false, 00:14:11.446 "zone_management": false, 00:14:11.446 "zone_append": false, 00:14:11.446 "compare": false, 00:14:11.446 "compare_and_write": false, 00:14:11.446 "abort": true, 00:14:11.446 "seek_hole": false, 00:14:11.446 "seek_data": false, 00:14:11.446 "copy": true, 00:14:11.446 "nvme_iov_md": false 00:14:11.446 }, 00:14:11.446 "memory_domains": [ 00:14:11.446 { 00:14:11.446 "dma_device_id": "system", 00:14:11.446 "dma_device_type": 1 00:14:11.446 }, 00:14:11.446 { 00:14:11.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.446 "dma_device_type": 2 00:14:11.446 } 00:14:11.446 ], 00:14:11.446 "driver_specific": {} 00:14:11.446 } 00:14:11.446 ] 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:11.446 11:55:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.446 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.447 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.705 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.705 "name": "Existed_Raid", 00:14:11.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.705 "strip_size_kb": 64, 00:14:11.705 "state": "configuring", 00:14:11.705 "raid_level": "concat", 00:14:11.705 "superblock": false, 00:14:11.705 "num_base_bdevs": 3, 00:14:11.705 "num_base_bdevs_discovered": 2, 00:14:11.705 "num_base_bdevs_operational": 3, 00:14:11.705 "base_bdevs_list": [ 00:14:11.705 { 00:14:11.705 "name": "BaseBdev1", 00:14:11.705 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:11.705 "is_configured": true, 00:14:11.705 "data_offset": 0, 00:14:11.705 "data_size": 65536 00:14:11.705 }, 00:14:11.705 { 00:14:11.705 "name": "BaseBdev2", 00:14:11.705 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:11.705 "is_configured": true, 00:14:11.705 "data_offset": 0, 00:14:11.705 "data_size": 65536 00:14:11.705 }, 00:14:11.705 { 00:14:11.705 "name": "BaseBdev3", 00:14:11.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.705 "is_configured": false, 00:14:11.705 "data_offset": 0, 00:14:11.705 "data_size": 0 00:14:11.705 } 00:14:11.705 ] 00:14:11.705 }' 00:14:11.705 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.705 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.271 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.529 [2024-07-25 11:55:58.433041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.529 [2024-07-25 11:55:58.433077] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2648700 
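By this point BaseBdev1 and BaseBdev2 are attached and the raid still reports two of three bases discovered; the bdev_malloc_create just issued for BaseBdev3 is what lets the configure path complete, as the raid_bdev_configure_cont debug lines that follow show. A short sketch of this final transition, same conventions as the earlier snippets:

  # adding the third and last base bdev assembles the concat volume
  ./spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
  # expected: the state field flips from "configuring" to "online"
  ./spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'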
00:14:12.529 [2024-07-25 11:55:58.433085] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:12.529 [2024-07-25 11:55:58.433266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26483d0 00:14:12.529 [2024-07-25 11:55:58.433380] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2648700 00:14:12.529 [2024-07-25 11:55:58.433389] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2648700 00:14:12.529 [2024-07-25 11:55:58.433542] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.529 BaseBdev3 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.529 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.787 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.787 [ 00:14:12.787 { 00:14:12.787 "name": "BaseBdev3", 00:14:12.787 "aliases": [ 00:14:12.787 "b0b11786-9c91-400a-80a7-427d71cf79cd" 00:14:12.787 ], 00:14:12.787 "product_name": "Malloc disk", 00:14:12.787 "block_size": 512, 00:14:12.787 "num_blocks": 65536, 00:14:12.787 "uuid": "b0b11786-9c91-400a-80a7-427d71cf79cd", 00:14:12.787 "assigned_rate_limits": { 00:14:12.787 "rw_ios_per_sec": 0, 00:14:12.787 "rw_mbytes_per_sec": 0, 00:14:12.787 "r_mbytes_per_sec": 0, 00:14:12.787 "w_mbytes_per_sec": 0 00:14:12.787 }, 00:14:12.787 "claimed": true, 00:14:12.787 "claim_type": "exclusive_write", 00:14:12.787 "zoned": false, 00:14:12.787 "supported_io_types": { 00:14:12.787 "read": true, 00:14:12.787 "write": true, 00:14:12.787 "unmap": true, 00:14:12.787 "flush": true, 00:14:12.787 "reset": true, 00:14:12.787 "nvme_admin": false, 00:14:12.787 "nvme_io": false, 00:14:12.787 "nvme_io_md": false, 00:14:12.787 "write_zeroes": true, 00:14:12.787 "zcopy": true, 00:14:12.787 "get_zone_info": false, 00:14:12.787 "zone_management": false, 00:14:12.787 "zone_append": false, 00:14:12.787 "compare": false, 00:14:12.787 "compare_and_write": false, 00:14:12.787 "abort": true, 00:14:12.787 "seek_hole": false, 00:14:12.787 "seek_data": false, 00:14:12.787 "copy": true, 00:14:12.787 "nvme_iov_md": false 00:14:12.787 }, 00:14:12.787 "memory_domains": [ 00:14:12.787 { 00:14:12.787 "dma_device_id": "system", 00:14:12.787 "dma_device_type": 1 00:14:12.787 }, 00:14:12.787 { 00:14:12.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.788 "dma_device_type": 2 00:14:12.788 } 00:14:12.788 ], 00:14:12.788 "driver_specific": {} 00:14:12.788 } 00:14:12.788 ] 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:13.080 11:55:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.080 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.080 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.080 "name": "Existed_Raid", 00:14:13.080 "uuid": "cd8aabeb-225c-43e7-a643-f9b453877cd0", 00:14:13.080 "strip_size_kb": 64, 00:14:13.080 "state": "online", 00:14:13.080 "raid_level": "concat", 00:14:13.080 "superblock": false, 00:14:13.080 "num_base_bdevs": 3, 00:14:13.080 "num_base_bdevs_discovered": 3, 00:14:13.080 "num_base_bdevs_operational": 3, 00:14:13.080 "base_bdevs_list": [ 00:14:13.080 { 00:14:13.080 "name": "BaseBdev1", 00:14:13.080 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:13.080 "is_configured": true, 00:14:13.080 "data_offset": 0, 00:14:13.080 "data_size": 65536 00:14:13.080 }, 00:14:13.080 { 00:14:13.080 "name": "BaseBdev2", 00:14:13.081 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:13.081 "is_configured": true, 00:14:13.081 "data_offset": 0, 00:14:13.081 "data_size": 65536 00:14:13.081 }, 00:14:13.081 { 00:14:13.081 "name": "BaseBdev3", 00:14:13.081 "uuid": "b0b11786-9c91-400a-80a7-427d71cf79cd", 00:14:13.081 "is_configured": true, 00:14:13.081 "data_offset": 0, 00:14:13.081 "data_size": 65536 00:14:13.081 } 00:14:13.081 ] 00:14:13.081 }' 00:14:13.081 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.081 11:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:13.648 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:13.649 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:13.649 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:13.907 [2024-07-25 11:55:59.857118] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.907 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:13.907 "name": "Existed_Raid", 00:14:13.907 "aliases": [ 00:14:13.907 "cd8aabeb-225c-43e7-a643-f9b453877cd0" 00:14:13.907 ], 00:14:13.907 "product_name": "Raid Volume", 00:14:13.907 "block_size": 512, 00:14:13.907 "num_blocks": 196608, 00:14:13.907 "uuid": "cd8aabeb-225c-43e7-a643-f9b453877cd0", 00:14:13.907 "assigned_rate_limits": { 00:14:13.907 "rw_ios_per_sec": 0, 00:14:13.907 "rw_mbytes_per_sec": 0, 00:14:13.907 "r_mbytes_per_sec": 0, 00:14:13.907 "w_mbytes_per_sec": 0 00:14:13.907 }, 00:14:13.907 "claimed": false, 00:14:13.907 "zoned": false, 00:14:13.907 "supported_io_types": { 00:14:13.907 "read": true, 00:14:13.907 "write": true, 00:14:13.907 "unmap": true, 00:14:13.907 "flush": true, 00:14:13.907 "reset": true, 00:14:13.907 "nvme_admin": false, 00:14:13.907 "nvme_io": false, 00:14:13.907 "nvme_io_md": false, 00:14:13.907 "write_zeroes": true, 00:14:13.907 "zcopy": false, 00:14:13.907 "get_zone_info": false, 00:14:13.907 "zone_management": false, 00:14:13.907 "zone_append": false, 00:14:13.907 "compare": false, 00:14:13.907 "compare_and_write": false, 00:14:13.907 "abort": false, 00:14:13.907 "seek_hole": false, 00:14:13.907 "seek_data": false, 00:14:13.907 "copy": false, 00:14:13.907 "nvme_iov_md": false 00:14:13.907 }, 00:14:13.907 "memory_domains": [ 00:14:13.907 { 00:14:13.907 "dma_device_id": "system", 00:14:13.907 "dma_device_type": 1 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.907 "dma_device_type": 2 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "dma_device_id": "system", 00:14:13.907 "dma_device_type": 1 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.907 "dma_device_type": 2 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "dma_device_id": "system", 00:14:13.907 "dma_device_type": 1 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.907 "dma_device_type": 2 00:14:13.907 } 00:14:13.907 ], 00:14:13.907 "driver_specific": { 00:14:13.907 "raid": { 00:14:13.907 "uuid": "cd8aabeb-225c-43e7-a643-f9b453877cd0", 00:14:13.907 "strip_size_kb": 64, 00:14:13.907 "state": "online", 00:14:13.907 "raid_level": "concat", 00:14:13.907 "superblock": false, 00:14:13.907 "num_base_bdevs": 3, 00:14:13.907 "num_base_bdevs_discovered": 3, 00:14:13.907 "num_base_bdevs_operational": 3, 00:14:13.907 "base_bdevs_list": [ 00:14:13.907 { 00:14:13.907 "name": "BaseBdev1", 00:14:13.907 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:13.907 "is_configured": true, 00:14:13.907 "data_offset": 0, 00:14:13.907 "data_size": 65536 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "name": "BaseBdev2", 00:14:13.907 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:13.907 "is_configured": true, 00:14:13.907 "data_offset": 0, 00:14:13.907 
"data_size": 65536 00:14:13.907 }, 00:14:13.907 { 00:14:13.907 "name": "BaseBdev3", 00:14:13.907 "uuid": "b0b11786-9c91-400a-80a7-427d71cf79cd", 00:14:13.907 "is_configured": true, 00:14:13.907 "data_offset": 0, 00:14:13.907 "data_size": 65536 00:14:13.907 } 00:14:13.907 ] 00:14:13.907 } 00:14:13.907 } 00:14:13.907 }' 00:14:13.907 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:13.907 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:13.907 BaseBdev2 00:14:13.907 BaseBdev3' 00:14:13.907 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.907 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:13.908 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.166 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.166 "name": "BaseBdev1", 00:14:14.166 "aliases": [ 00:14:14.166 "bef5cce6-795a-4823-8f48-27f8999d16e6" 00:14:14.166 ], 00:14:14.166 "product_name": "Malloc disk", 00:14:14.166 "block_size": 512, 00:14:14.166 "num_blocks": 65536, 00:14:14.166 "uuid": "bef5cce6-795a-4823-8f48-27f8999d16e6", 00:14:14.166 "assigned_rate_limits": { 00:14:14.166 "rw_ios_per_sec": 0, 00:14:14.166 "rw_mbytes_per_sec": 0, 00:14:14.166 "r_mbytes_per_sec": 0, 00:14:14.166 "w_mbytes_per_sec": 0 00:14:14.166 }, 00:14:14.166 "claimed": true, 00:14:14.166 "claim_type": "exclusive_write", 00:14:14.166 "zoned": false, 00:14:14.166 "supported_io_types": { 00:14:14.166 "read": true, 00:14:14.166 "write": true, 00:14:14.166 "unmap": true, 00:14:14.166 "flush": true, 00:14:14.166 "reset": true, 00:14:14.166 "nvme_admin": false, 00:14:14.166 "nvme_io": false, 00:14:14.166 "nvme_io_md": false, 00:14:14.166 "write_zeroes": true, 00:14:14.166 "zcopy": true, 00:14:14.166 "get_zone_info": false, 00:14:14.166 "zone_management": false, 00:14:14.166 "zone_append": false, 00:14:14.166 "compare": false, 00:14:14.166 "compare_and_write": false, 00:14:14.166 "abort": true, 00:14:14.166 "seek_hole": false, 00:14:14.166 "seek_data": false, 00:14:14.166 "copy": true, 00:14:14.166 "nvme_iov_md": false 00:14:14.166 }, 00:14:14.166 "memory_domains": [ 00:14:14.166 { 00:14:14.166 "dma_device_id": "system", 00:14:14.166 "dma_device_type": 1 00:14:14.166 }, 00:14:14.166 { 00:14:14.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.166 "dma_device_type": 2 00:14:14.166 } 00:14:14.166 ], 00:14:14.166 "driver_specific": {} 00:14:14.166 }' 00:14:14.166 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.166 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.166 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.166 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.425 11:56:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:14.425 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.684 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.684 "name": "BaseBdev2", 00:14:14.684 "aliases": [ 00:14:14.684 "19a92804-b242-4d98-bed6-167a61111c58" 00:14:14.684 ], 00:14:14.684 "product_name": "Malloc disk", 00:14:14.684 "block_size": 512, 00:14:14.684 "num_blocks": 65536, 00:14:14.684 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:14.684 "assigned_rate_limits": { 00:14:14.684 "rw_ios_per_sec": 0, 00:14:14.684 "rw_mbytes_per_sec": 0, 00:14:14.684 "r_mbytes_per_sec": 0, 00:14:14.684 "w_mbytes_per_sec": 0 00:14:14.684 }, 00:14:14.684 "claimed": true, 00:14:14.684 "claim_type": "exclusive_write", 00:14:14.684 "zoned": false, 00:14:14.684 "supported_io_types": { 00:14:14.684 "read": true, 00:14:14.684 "write": true, 00:14:14.684 "unmap": true, 00:14:14.684 "flush": true, 00:14:14.684 "reset": true, 00:14:14.684 "nvme_admin": false, 00:14:14.684 "nvme_io": false, 00:14:14.684 "nvme_io_md": false, 00:14:14.684 "write_zeroes": true, 00:14:14.684 "zcopy": true, 00:14:14.684 "get_zone_info": false, 00:14:14.684 "zone_management": false, 00:14:14.684 "zone_append": false, 00:14:14.684 "compare": false, 00:14:14.684 "compare_and_write": false, 00:14:14.684 "abort": true, 00:14:14.684 "seek_hole": false, 00:14:14.684 "seek_data": false, 00:14:14.684 "copy": true, 00:14:14.684 "nvme_iov_md": false 00:14:14.684 }, 00:14:14.684 "memory_domains": [ 00:14:14.684 { 00:14:14.684 "dma_device_id": "system", 00:14:14.684 "dma_device_type": 1 00:14:14.684 }, 00:14:14.684 { 00:14:14.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.684 "dma_device_type": 2 00:14:14.684 } 00:14:14.684 ], 00:14:14.684 "driver_specific": {} 00:14:14.684 }' 00:14:14.684 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.684 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.684 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.684 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:14:14.943 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.943 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.943 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.943 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.943 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.943 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:15.202 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.202 "name": "BaseBdev3", 00:14:15.202 "aliases": [ 00:14:15.202 "b0b11786-9c91-400a-80a7-427d71cf79cd" 00:14:15.202 ], 00:14:15.202 "product_name": "Malloc disk", 00:14:15.202 "block_size": 512, 00:14:15.202 "num_blocks": 65536, 00:14:15.202 "uuid": "b0b11786-9c91-400a-80a7-427d71cf79cd", 00:14:15.202 "assigned_rate_limits": { 00:14:15.202 "rw_ios_per_sec": 0, 00:14:15.202 "rw_mbytes_per_sec": 0, 00:14:15.202 "r_mbytes_per_sec": 0, 00:14:15.202 "w_mbytes_per_sec": 0 00:14:15.202 }, 00:14:15.202 "claimed": true, 00:14:15.202 "claim_type": "exclusive_write", 00:14:15.202 "zoned": false, 00:14:15.202 "supported_io_types": { 00:14:15.202 "read": true, 00:14:15.202 "write": true, 00:14:15.202 "unmap": true, 00:14:15.202 "flush": true, 00:14:15.202 "reset": true, 00:14:15.202 "nvme_admin": false, 00:14:15.202 "nvme_io": false, 00:14:15.202 "nvme_io_md": false, 00:14:15.202 "write_zeroes": true, 00:14:15.202 "zcopy": true, 00:14:15.202 "get_zone_info": false, 00:14:15.202 "zone_management": false, 00:14:15.202 "zone_append": false, 00:14:15.202 "compare": false, 00:14:15.202 "compare_and_write": false, 00:14:15.202 "abort": true, 00:14:15.202 "seek_hole": false, 00:14:15.202 "seek_data": false, 00:14:15.202 "copy": true, 00:14:15.202 "nvme_iov_md": false 00:14:15.202 }, 00:14:15.202 "memory_domains": [ 00:14:15.202 { 00:14:15.202 "dma_device_id": "system", 00:14:15.202 "dma_device_type": 1 00:14:15.202 }, 00:14:15.202 { 00:14:15.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.202 "dma_device_type": 2 00:14:15.202 } 00:14:15.202 ], 00:14:15.202 "driver_specific": {} 00:14:15.202 }' 00:14:15.202 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.202 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.460 11:56:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.719 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.719 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.719 [2024-07-25 11:56:01.757902] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.720 [2024-07-25 11:56:01.757927] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.720 [2024-07-25 11:56:01.757966] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.720 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.978 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.978 "name": "Existed_Raid", 00:14:15.978 "uuid": "cd8aabeb-225c-43e7-a643-f9b453877cd0", 00:14:15.978 "strip_size_kb": 64, 00:14:15.978 "state": "offline", 00:14:15.978 "raid_level": "concat", 00:14:15.978 "superblock": false, 00:14:15.978 "num_base_bdevs": 3, 00:14:15.978 "num_base_bdevs_discovered": 2, 00:14:15.978 "num_base_bdevs_operational": 2, 00:14:15.978 "base_bdevs_list": [ 00:14:15.978 { 00:14:15.978 "name": null, 00:14:15.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.978 "is_configured": false, 00:14:15.978 "data_offset": 0, 00:14:15.978 "data_size": 65536 00:14:15.978 }, 00:14:15.978 { 00:14:15.978 "name": "BaseBdev2", 00:14:15.978 "uuid": "19a92804-b242-4d98-bed6-167a61111c58", 00:14:15.978 
"is_configured": true, 00:14:15.978 "data_offset": 0, 00:14:15.978 "data_size": 65536 00:14:15.978 }, 00:14:15.978 { 00:14:15.978 "name": "BaseBdev3", 00:14:15.978 "uuid": "b0b11786-9c91-400a-80a7-427d71cf79cd", 00:14:15.978 "is_configured": true, 00:14:15.978 "data_offset": 0, 00:14:15.978 "data_size": 65536 00:14:15.978 } 00:14:15.978 ] 00:14:15.978 }' 00:14:15.978 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.978 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.545 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:16.545 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:16.545 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.545 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:16.804 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:16.804 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.804 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:17.063 [2024-07-25 11:56:02.974043] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.063 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:17.063 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.063 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.063 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:17.322 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:17.322 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.322 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:17.322 [2024-07-25 11:56:03.437151] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.322 [2024-07-25 11:56:03.437193] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2648700 name Existed_Raid, state offline 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:14:17.580 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.839 BaseBdev2 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:17.839 11:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.097 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.356 [ 00:14:18.356 { 00:14:18.356 "name": "BaseBdev2", 00:14:18.356 "aliases": [ 00:14:18.356 "4e9ba135-df3c-4c0e-a2b3-4662fe74c223" 00:14:18.356 ], 00:14:18.356 "product_name": "Malloc disk", 00:14:18.356 "block_size": 512, 00:14:18.356 "num_blocks": 65536, 00:14:18.356 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:18.356 "assigned_rate_limits": { 00:14:18.356 "rw_ios_per_sec": 0, 00:14:18.356 "rw_mbytes_per_sec": 0, 00:14:18.356 "r_mbytes_per_sec": 0, 00:14:18.356 "w_mbytes_per_sec": 0 00:14:18.356 }, 00:14:18.356 "claimed": false, 00:14:18.356 "zoned": false, 00:14:18.356 "supported_io_types": { 00:14:18.356 "read": true, 00:14:18.356 "write": true, 00:14:18.356 "unmap": true, 00:14:18.356 "flush": true, 00:14:18.356 "reset": true, 00:14:18.356 "nvme_admin": false, 00:14:18.356 "nvme_io": false, 00:14:18.356 "nvme_io_md": false, 00:14:18.356 "write_zeroes": true, 00:14:18.356 "zcopy": true, 00:14:18.356 "get_zone_info": false, 00:14:18.356 "zone_management": false, 00:14:18.356 "zone_append": false, 00:14:18.356 "compare": false, 00:14:18.356 "compare_and_write": false, 00:14:18.356 "abort": true, 00:14:18.356 "seek_hole": false, 00:14:18.356 "seek_data": false, 00:14:18.356 "copy": true, 00:14:18.356 "nvme_iov_md": false 00:14:18.356 }, 00:14:18.356 "memory_domains": [ 00:14:18.356 { 00:14:18.356 "dma_device_id": "system", 00:14:18.356 "dma_device_type": 1 00:14:18.356 }, 00:14:18.356 { 00:14:18.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.356 "dma_device_type": 2 00:14:18.356 } 00:14:18.356 ], 00:14:18.356 "driver_specific": {} 00:14:18.356 } 00:14:18.356 ] 00:14:18.356 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:18.356 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:18.356 11:56:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:18.356 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.615 BaseBdev3 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.615 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.874 11:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.134 [ 00:14:19.134 { 00:14:19.134 "name": "BaseBdev3", 00:14:19.134 "aliases": [ 00:14:19.134 "fd8b9902-05da-4275-b262-be640d31c9e0" 00:14:19.134 ], 00:14:19.134 "product_name": "Malloc disk", 00:14:19.134 "block_size": 512, 00:14:19.134 "num_blocks": 65536, 00:14:19.134 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:19.134 "assigned_rate_limits": { 00:14:19.134 "rw_ios_per_sec": 0, 00:14:19.134 "rw_mbytes_per_sec": 0, 00:14:19.134 "r_mbytes_per_sec": 0, 00:14:19.134 "w_mbytes_per_sec": 0 00:14:19.134 }, 00:14:19.134 "claimed": false, 00:14:19.134 "zoned": false, 00:14:19.134 "supported_io_types": { 00:14:19.134 "read": true, 00:14:19.134 "write": true, 00:14:19.134 "unmap": true, 00:14:19.134 "flush": true, 00:14:19.134 "reset": true, 00:14:19.134 "nvme_admin": false, 00:14:19.134 "nvme_io": false, 00:14:19.134 "nvme_io_md": false, 00:14:19.134 "write_zeroes": true, 00:14:19.134 "zcopy": true, 00:14:19.134 "get_zone_info": false, 00:14:19.134 "zone_management": false, 00:14:19.134 "zone_append": false, 00:14:19.134 "compare": false, 00:14:19.134 "compare_and_write": false, 00:14:19.134 "abort": true, 00:14:19.134 "seek_hole": false, 00:14:19.134 "seek_data": false, 00:14:19.134 "copy": true, 00:14:19.134 "nvme_iov_md": false 00:14:19.134 }, 00:14:19.134 "memory_domains": [ 00:14:19.134 { 00:14:19.134 "dma_device_id": "system", 00:14:19.134 "dma_device_type": 1 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.134 "dma_device_type": 2 00:14:19.134 } 00:14:19.134 ], 00:14:19.134 "driver_specific": {} 00:14:19.134 } 00:14:19.134 ] 00:14:19.134 11:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:19.134 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:19.134 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:19.134 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:14:19.393 [2024-07-25 11:56:05.284593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.393 [2024-07-25 11:56:05.284632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.393 [2024-07-25 11:56:05.284648] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.393 [2024-07-25 11:56:05.285881] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.393 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.652 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.652 "name": "Existed_Raid", 00:14:19.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.652 "strip_size_kb": 64, 00:14:19.652 "state": "configuring", 00:14:19.652 "raid_level": "concat", 00:14:19.652 "superblock": false, 00:14:19.652 "num_base_bdevs": 3, 00:14:19.652 "num_base_bdevs_discovered": 2, 00:14:19.652 "num_base_bdevs_operational": 3, 00:14:19.652 "base_bdevs_list": [ 00:14:19.652 { 00:14:19.652 "name": "BaseBdev1", 00:14:19.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.652 "is_configured": false, 00:14:19.652 "data_offset": 0, 00:14:19.652 "data_size": 0 00:14:19.652 }, 00:14:19.652 { 00:14:19.652 "name": "BaseBdev2", 00:14:19.652 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:19.652 "is_configured": true, 00:14:19.652 "data_offset": 0, 00:14:19.652 "data_size": 65536 00:14:19.652 }, 00:14:19.652 { 00:14:19.652 "name": "BaseBdev3", 00:14:19.652 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:19.652 "is_configured": true, 00:14:19.652 "data_offset": 0, 00:14:19.652 "data_size": 65536 00:14:19.652 } 00:14:19.652 ] 00:14:19.652 }' 00:14:19.652 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.652 11:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:20.220 [2024-07-25 11:56:06.299237] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.220 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.479 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.479 "name": "Existed_Raid", 00:14:20.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.479 "strip_size_kb": 64, 00:14:20.479 "state": "configuring", 00:14:20.479 "raid_level": "concat", 00:14:20.479 "superblock": false, 00:14:20.479 "num_base_bdevs": 3, 00:14:20.479 "num_base_bdevs_discovered": 1, 00:14:20.479 "num_base_bdevs_operational": 3, 00:14:20.479 "base_bdevs_list": [ 00:14:20.479 { 00:14:20.479 "name": "BaseBdev1", 00:14:20.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.479 "is_configured": false, 00:14:20.479 "data_offset": 0, 00:14:20.479 "data_size": 0 00:14:20.479 }, 00:14:20.479 { 00:14:20.479 "name": null, 00:14:20.479 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:20.479 "is_configured": false, 00:14:20.479 "data_offset": 0, 00:14:20.479 "data_size": 65536 00:14:20.479 }, 00:14:20.479 { 00:14:20.479 "name": "BaseBdev3", 00:14:20.480 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:20.480 "is_configured": true, 00:14:20.480 "data_offset": 0, 00:14:20.480 "data_size": 65536 00:14:20.480 } 00:14:20.480 ] 00:14:20.480 }' 00:14:20.480 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.480 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 11:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.047 11:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.306 11:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ 
false == \f\a\l\s\e ]] 00:14:21.306 11:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.565 [2024-07-25 11:56:07.549801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.565 BaseBdev1 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.565 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.824 11:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.083 [ 00:14:22.083 { 00:14:22.083 "name": "BaseBdev1", 00:14:22.083 "aliases": [ 00:14:22.083 "01661c4c-3a06-43c0-be92-2bca21f14098" 00:14:22.083 ], 00:14:22.083 "product_name": "Malloc disk", 00:14:22.083 "block_size": 512, 00:14:22.083 "num_blocks": 65536, 00:14:22.083 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:22.083 "assigned_rate_limits": { 00:14:22.083 "rw_ios_per_sec": 0, 00:14:22.083 "rw_mbytes_per_sec": 0, 00:14:22.083 "r_mbytes_per_sec": 0, 00:14:22.083 "w_mbytes_per_sec": 0 00:14:22.083 }, 00:14:22.083 "claimed": true, 00:14:22.083 "claim_type": "exclusive_write", 00:14:22.083 "zoned": false, 00:14:22.083 "supported_io_types": { 00:14:22.083 "read": true, 00:14:22.083 "write": true, 00:14:22.083 "unmap": true, 00:14:22.083 "flush": true, 00:14:22.083 "reset": true, 00:14:22.083 "nvme_admin": false, 00:14:22.083 "nvme_io": false, 00:14:22.083 "nvme_io_md": false, 00:14:22.083 "write_zeroes": true, 00:14:22.083 "zcopy": true, 00:14:22.083 "get_zone_info": false, 00:14:22.083 "zone_management": false, 00:14:22.083 "zone_append": false, 00:14:22.083 "compare": false, 00:14:22.083 "compare_and_write": false, 00:14:22.083 "abort": true, 00:14:22.083 "seek_hole": false, 00:14:22.083 "seek_data": false, 00:14:22.083 "copy": true, 00:14:22.083 "nvme_iov_md": false 00:14:22.083 }, 00:14:22.083 "memory_domains": [ 00:14:22.083 { 00:14:22.083 "dma_device_id": "system", 00:14:22.083 "dma_device_type": 1 00:14:22.083 }, 00:14:22.083 { 00:14:22.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.083 "dma_device_type": 2 00:14:22.083 } 00:14:22.083 ], 00:14:22.083 "driver_specific": {} 00:14:22.083 } 00:14:22.083 ] 00:14:22.083 11:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.084 11:56:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.084 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.342 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.342 "name": "Existed_Raid", 00:14:22.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.342 "strip_size_kb": 64, 00:14:22.342 "state": "configuring", 00:14:22.342 "raid_level": "concat", 00:14:22.343 "superblock": false, 00:14:22.343 "num_base_bdevs": 3, 00:14:22.343 "num_base_bdevs_discovered": 2, 00:14:22.343 "num_base_bdevs_operational": 3, 00:14:22.343 "base_bdevs_list": [ 00:14:22.343 { 00:14:22.343 "name": "BaseBdev1", 00:14:22.343 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:22.343 "is_configured": true, 00:14:22.343 "data_offset": 0, 00:14:22.343 "data_size": 65536 00:14:22.343 }, 00:14:22.343 { 00:14:22.343 "name": null, 00:14:22.343 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:22.343 "is_configured": false, 00:14:22.343 "data_offset": 0, 00:14:22.343 "data_size": 65536 00:14:22.343 }, 00:14:22.343 { 00:14:22.343 "name": "BaseBdev3", 00:14:22.343 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:22.343 "is_configured": true, 00:14:22.343 "data_offset": 0, 00:14:22.343 "data_size": 65536 00:14:22.343 } 00:14:22.343 ] 00:14:22.343 }' 00:14:22.343 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.343 11:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.911 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.911 11:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.170 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:23.170 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:23.170 [2024-07-25 11:56:09.242283] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:23.171 11:56:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.171 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.430 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.430 "name": "Existed_Raid", 00:14:23.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.430 "strip_size_kb": 64, 00:14:23.430 "state": "configuring", 00:14:23.430 "raid_level": "concat", 00:14:23.430 "superblock": false, 00:14:23.430 "num_base_bdevs": 3, 00:14:23.430 "num_base_bdevs_discovered": 1, 00:14:23.430 "num_base_bdevs_operational": 3, 00:14:23.430 "base_bdevs_list": [ 00:14:23.430 { 00:14:23.430 "name": "BaseBdev1", 00:14:23.430 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:23.430 "is_configured": true, 00:14:23.430 "data_offset": 0, 00:14:23.430 "data_size": 65536 00:14:23.430 }, 00:14:23.430 { 00:14:23.430 "name": null, 00:14:23.430 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:23.430 "is_configured": false, 00:14:23.430 "data_offset": 0, 00:14:23.430 "data_size": 65536 00:14:23.430 }, 00:14:23.430 { 00:14:23.430 "name": null, 00:14:23.430 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:23.430 "is_configured": false, 00:14:23.430 "data_offset": 0, 00:14:23.430 "data_size": 65536 00:14:23.430 } 00:14:23.430 ] 00:14:23.430 }' 00:14:23.430 11:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.430 11:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.998 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.998 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.258 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:24.258 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.517 [2024-07-25 11:56:10.413398] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.517 11:56:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.517 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.777 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.777 "name": "Existed_Raid", 00:14:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.777 "strip_size_kb": 64, 00:14:24.777 "state": "configuring", 00:14:24.777 "raid_level": "concat", 00:14:24.777 "superblock": false, 00:14:24.777 "num_base_bdevs": 3, 00:14:24.777 "num_base_bdevs_discovered": 2, 00:14:24.777 "num_base_bdevs_operational": 3, 00:14:24.777 "base_bdevs_list": [ 00:14:24.777 { 00:14:24.777 "name": "BaseBdev1", 00:14:24.777 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:24.777 "is_configured": true, 00:14:24.777 "data_offset": 0, 00:14:24.777 "data_size": 65536 00:14:24.777 }, 00:14:24.777 { 00:14:24.777 "name": null, 00:14:24.777 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:24.777 "is_configured": false, 00:14:24.777 "data_offset": 0, 00:14:24.777 "data_size": 65536 00:14:24.777 }, 00:14:24.777 { 00:14:24.777 "name": "BaseBdev3", 00:14:24.777 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:24.777 "is_configured": true, 00:14:24.777 "data_offset": 0, 00:14:24.777 "data_size": 65536 00:14:24.777 } 00:14:24.777 ] 00:14:24.777 }' 00:14:24.777 11:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.777 11:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.344 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.344 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.344 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:25.344 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.603 [2024-07-25 
11:56:11.616577] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.603 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.862 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.863 "name": "Existed_Raid", 00:14:25.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.863 "strip_size_kb": 64, 00:14:25.863 "state": "configuring", 00:14:25.863 "raid_level": "concat", 00:14:25.863 "superblock": false, 00:14:25.863 "num_base_bdevs": 3, 00:14:25.863 "num_base_bdevs_discovered": 1, 00:14:25.863 "num_base_bdevs_operational": 3, 00:14:25.863 "base_bdevs_list": [ 00:14:25.863 { 00:14:25.863 "name": null, 00:14:25.863 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:25.863 "is_configured": false, 00:14:25.863 "data_offset": 0, 00:14:25.863 "data_size": 65536 00:14:25.863 }, 00:14:25.863 { 00:14:25.863 "name": null, 00:14:25.863 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:25.863 "is_configured": false, 00:14:25.863 "data_offset": 0, 00:14:25.863 "data_size": 65536 00:14:25.863 }, 00:14:25.863 { 00:14:25.863 "name": "BaseBdev3", 00:14:25.863 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:25.863 "is_configured": true, 00:14:25.863 "data_offset": 0, 00:14:25.863 "data_size": 65536 00:14:25.863 } 00:14:25.863 ] 00:14:25.863 }' 00:14:25.863 11:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.863 11:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.440 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.440 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.699 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:26.699 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:26.963 [2024-07-25 11:56:12.894033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.963 11:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.241 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.241 "name": "Existed_Raid", 00:14:27.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.241 "strip_size_kb": 64, 00:14:27.241 "state": "configuring", 00:14:27.241 "raid_level": "concat", 00:14:27.241 "superblock": false, 00:14:27.241 "num_base_bdevs": 3, 00:14:27.241 "num_base_bdevs_discovered": 2, 00:14:27.241 "num_base_bdevs_operational": 3, 00:14:27.241 "base_bdevs_list": [ 00:14:27.241 { 00:14:27.241 "name": null, 00:14:27.241 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:27.241 "is_configured": false, 00:14:27.241 "data_offset": 0, 00:14:27.241 "data_size": 65536 00:14:27.241 }, 00:14:27.241 { 00:14:27.241 "name": "BaseBdev2", 00:14:27.241 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:27.241 "is_configured": true, 00:14:27.241 "data_offset": 0, 00:14:27.241 "data_size": 65536 00:14:27.241 }, 00:14:27.241 { 00:14:27.241 "name": "BaseBdev3", 00:14:27.241 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:27.241 "is_configured": true, 00:14:27.241 "data_offset": 0, 00:14:27.241 "data_size": 65536 00:14:27.241 } 00:14:27.241 ] 00:14:27.241 }' 00:14:27.241 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.241 11:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.821 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.080 11:56:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:28.080 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.080 11:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:28.080 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 01661c4c-3a06-43c0-be92-2bca21f14098 00:14:28.339 [2024-07-25 11:56:14.385129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:28.339 [2024-07-25 11:56:14.385171] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2648af0 00:14:28.339 [2024-07-25 11:56:14.385179] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:28.339 [2024-07-25 11:56:14.385357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2647f10 00:14:28.339 [2024-07-25 11:56:14.385463] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2648af0 00:14:28.339 [2024-07-25 11:56:14.385472] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2648af0 00:14:28.339 [2024-07-25 11:56:14.385622] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.339 NewBaseBdev 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.339 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.598 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:28.857 [ 00:14:28.857 { 00:14:28.857 "name": "NewBaseBdev", 00:14:28.857 "aliases": [ 00:14:28.857 "01661c4c-3a06-43c0-be92-2bca21f14098" 00:14:28.857 ], 00:14:28.857 "product_name": "Malloc disk", 00:14:28.857 "block_size": 512, 00:14:28.857 "num_blocks": 65536, 00:14:28.857 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:28.857 "assigned_rate_limits": { 00:14:28.857 "rw_ios_per_sec": 0, 00:14:28.857 "rw_mbytes_per_sec": 0, 00:14:28.857 "r_mbytes_per_sec": 0, 00:14:28.857 "w_mbytes_per_sec": 0 00:14:28.857 }, 00:14:28.857 "claimed": true, 00:14:28.857 "claim_type": "exclusive_write", 00:14:28.857 "zoned": false, 00:14:28.857 "supported_io_types": { 00:14:28.857 "read": true, 00:14:28.857 "write": true, 00:14:28.857 "unmap": true, 00:14:28.857 "flush": true, 00:14:28.857 "reset": true, 00:14:28.857 "nvme_admin": false, 00:14:28.857 "nvme_io": false, 00:14:28.857 "nvme_io_md": false, 
00:14:28.857 "write_zeroes": true, 00:14:28.857 "zcopy": true, 00:14:28.857 "get_zone_info": false, 00:14:28.857 "zone_management": false, 00:14:28.857 "zone_append": false, 00:14:28.857 "compare": false, 00:14:28.857 "compare_and_write": false, 00:14:28.857 "abort": true, 00:14:28.857 "seek_hole": false, 00:14:28.857 "seek_data": false, 00:14:28.857 "copy": true, 00:14:28.857 "nvme_iov_md": false 00:14:28.857 }, 00:14:28.857 "memory_domains": [ 00:14:28.857 { 00:14:28.857 "dma_device_id": "system", 00:14:28.857 "dma_device_type": 1 00:14:28.857 }, 00:14:28.857 { 00:14:28.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.857 "dma_device_type": 2 00:14:28.857 } 00:14:28.857 ], 00:14:28.857 "driver_specific": {} 00:14:28.857 } 00:14:28.857 ] 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.857 11:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.117 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.117 "name": "Existed_Raid", 00:14:29.117 "uuid": "4800b741-1754-4651-9cd4-720189153986", 00:14:29.117 "strip_size_kb": 64, 00:14:29.117 "state": "online", 00:14:29.117 "raid_level": "concat", 00:14:29.117 "superblock": false, 00:14:29.117 "num_base_bdevs": 3, 00:14:29.117 "num_base_bdevs_discovered": 3, 00:14:29.117 "num_base_bdevs_operational": 3, 00:14:29.117 "base_bdevs_list": [ 00:14:29.117 { 00:14:29.117 "name": "NewBaseBdev", 00:14:29.117 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:29.117 "is_configured": true, 00:14:29.117 "data_offset": 0, 00:14:29.117 "data_size": 65536 00:14:29.117 }, 00:14:29.117 { 00:14:29.117 "name": "BaseBdev2", 00:14:29.117 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:29.117 "is_configured": true, 00:14:29.117 "data_offset": 0, 00:14:29.117 "data_size": 65536 00:14:29.117 }, 00:14:29.117 { 00:14:29.117 "name": "BaseBdev3", 00:14:29.117 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:29.117 "is_configured": true, 00:14:29.117 "data_offset": 0, 00:14:29.117 "data_size": 65536 00:14:29.117 } 00:14:29.117 ] 00:14:29.117 }' 
00:14:29.117 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.117 11:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:29.683 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.942 [2024-07-25 11:56:15.821217] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.942 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.942 "name": "Existed_Raid", 00:14:29.942 "aliases": [ 00:14:29.942 "4800b741-1754-4651-9cd4-720189153986" 00:14:29.942 ], 00:14:29.942 "product_name": "Raid Volume", 00:14:29.942 "block_size": 512, 00:14:29.942 "num_blocks": 196608, 00:14:29.942 "uuid": "4800b741-1754-4651-9cd4-720189153986", 00:14:29.942 "assigned_rate_limits": { 00:14:29.942 "rw_ios_per_sec": 0, 00:14:29.942 "rw_mbytes_per_sec": 0, 00:14:29.942 "r_mbytes_per_sec": 0, 00:14:29.942 "w_mbytes_per_sec": 0 00:14:29.942 }, 00:14:29.942 "claimed": false, 00:14:29.942 "zoned": false, 00:14:29.942 "supported_io_types": { 00:14:29.943 "read": true, 00:14:29.943 "write": true, 00:14:29.943 "unmap": true, 00:14:29.943 "flush": true, 00:14:29.943 "reset": true, 00:14:29.943 "nvme_admin": false, 00:14:29.943 "nvme_io": false, 00:14:29.943 "nvme_io_md": false, 00:14:29.943 "write_zeroes": true, 00:14:29.943 "zcopy": false, 00:14:29.943 "get_zone_info": false, 00:14:29.943 "zone_management": false, 00:14:29.943 "zone_append": false, 00:14:29.943 "compare": false, 00:14:29.943 "compare_and_write": false, 00:14:29.943 "abort": false, 00:14:29.943 "seek_hole": false, 00:14:29.943 "seek_data": false, 00:14:29.943 "copy": false, 00:14:29.943 "nvme_iov_md": false 00:14:29.943 }, 00:14:29.943 "memory_domains": [ 00:14:29.943 { 00:14:29.943 "dma_device_id": "system", 00:14:29.943 "dma_device_type": 1 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.943 "dma_device_type": 2 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "dma_device_id": "system", 00:14:29.943 "dma_device_type": 1 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.943 "dma_device_type": 2 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "dma_device_id": "system", 00:14:29.943 "dma_device_type": 1 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.943 "dma_device_type": 2 00:14:29.943 } 00:14:29.943 ], 00:14:29.943 "driver_specific": { 00:14:29.943 "raid": { 00:14:29.943 "uuid": "4800b741-1754-4651-9cd4-720189153986", 00:14:29.943 "strip_size_kb": 64, 00:14:29.943 
"state": "online", 00:14:29.943 "raid_level": "concat", 00:14:29.943 "superblock": false, 00:14:29.943 "num_base_bdevs": 3, 00:14:29.943 "num_base_bdevs_discovered": 3, 00:14:29.943 "num_base_bdevs_operational": 3, 00:14:29.943 "base_bdevs_list": [ 00:14:29.943 { 00:14:29.943 "name": "NewBaseBdev", 00:14:29.943 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 0, 00:14:29.943 "data_size": 65536 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "name": "BaseBdev2", 00:14:29.943 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 0, 00:14:29.943 "data_size": 65536 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "name": "BaseBdev3", 00:14:29.943 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 0, 00:14:29.943 "data_size": 65536 00:14:29.943 } 00:14:29.943 ] 00:14:29.943 } 00:14:29.943 } 00:14:29.943 }' 00:14:29.943 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.943 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:29.943 BaseBdev2 00:14:29.943 BaseBdev3' 00:14:29.943 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.943 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:29.943 11:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.202 "name": "NewBaseBdev", 00:14:30.202 "aliases": [ 00:14:30.202 "01661c4c-3a06-43c0-be92-2bca21f14098" 00:14:30.202 ], 00:14:30.202 "product_name": "Malloc disk", 00:14:30.202 "block_size": 512, 00:14:30.202 "num_blocks": 65536, 00:14:30.202 "uuid": "01661c4c-3a06-43c0-be92-2bca21f14098", 00:14:30.202 "assigned_rate_limits": { 00:14:30.202 "rw_ios_per_sec": 0, 00:14:30.202 "rw_mbytes_per_sec": 0, 00:14:30.202 "r_mbytes_per_sec": 0, 00:14:30.202 "w_mbytes_per_sec": 0 00:14:30.202 }, 00:14:30.202 "claimed": true, 00:14:30.202 "claim_type": "exclusive_write", 00:14:30.202 "zoned": false, 00:14:30.202 "supported_io_types": { 00:14:30.202 "read": true, 00:14:30.202 "write": true, 00:14:30.202 "unmap": true, 00:14:30.202 "flush": true, 00:14:30.202 "reset": true, 00:14:30.202 "nvme_admin": false, 00:14:30.202 "nvme_io": false, 00:14:30.202 "nvme_io_md": false, 00:14:30.202 "write_zeroes": true, 00:14:30.202 "zcopy": true, 00:14:30.202 "get_zone_info": false, 00:14:30.202 "zone_management": false, 00:14:30.202 "zone_append": false, 00:14:30.202 "compare": false, 00:14:30.202 "compare_and_write": false, 00:14:30.202 "abort": true, 00:14:30.202 "seek_hole": false, 00:14:30.202 "seek_data": false, 00:14:30.202 "copy": true, 00:14:30.202 "nvme_iov_md": false 00:14:30.202 }, 00:14:30.202 "memory_domains": [ 00:14:30.202 { 00:14:30.202 "dma_device_id": "system", 00:14:30.202 "dma_device_type": 1 00:14:30.202 }, 00:14:30.202 { 00:14:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.202 "dma_device_type": 2 00:14:30.202 } 00:14:30.202 ], 00:14:30.202 "driver_specific": {} 00:14:30.202 }' 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.202 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:30.461 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.719 "name": "BaseBdev2", 00:14:30.719 "aliases": [ 00:14:30.719 "4e9ba135-df3c-4c0e-a2b3-4662fe74c223" 00:14:30.719 ], 00:14:30.719 "product_name": "Malloc disk", 00:14:30.719 "block_size": 512, 00:14:30.719 "num_blocks": 65536, 00:14:30.719 "uuid": "4e9ba135-df3c-4c0e-a2b3-4662fe74c223", 00:14:30.719 "assigned_rate_limits": { 00:14:30.719 "rw_ios_per_sec": 0, 00:14:30.719 "rw_mbytes_per_sec": 0, 00:14:30.719 "r_mbytes_per_sec": 0, 00:14:30.719 "w_mbytes_per_sec": 0 00:14:30.719 }, 00:14:30.719 "claimed": true, 00:14:30.719 "claim_type": "exclusive_write", 00:14:30.719 "zoned": false, 00:14:30.719 "supported_io_types": { 00:14:30.719 "read": true, 00:14:30.719 "write": true, 00:14:30.719 "unmap": true, 00:14:30.719 "flush": true, 00:14:30.719 "reset": true, 00:14:30.719 "nvme_admin": false, 00:14:30.719 "nvme_io": false, 00:14:30.719 "nvme_io_md": false, 00:14:30.719 "write_zeroes": true, 00:14:30.719 "zcopy": true, 00:14:30.719 "get_zone_info": false, 00:14:30.719 "zone_management": false, 00:14:30.719 "zone_append": false, 00:14:30.719 "compare": false, 00:14:30.719 "compare_and_write": false, 00:14:30.719 "abort": true, 00:14:30.719 "seek_hole": false, 00:14:30.719 "seek_data": false, 00:14:30.719 "copy": true, 00:14:30.719 "nvme_iov_md": false 00:14:30.719 }, 00:14:30.719 "memory_domains": [ 00:14:30.719 { 00:14:30.719 "dma_device_id": "system", 00:14:30.719 "dma_device_type": 1 00:14:30.719 }, 00:14:30.719 { 00:14:30.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.719 "dma_device_type": 2 00:14:30.719 } 00:14:30.719 ], 00:14:30.719 "driver_specific": {} 00:14:30.719 }' 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.719 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.978 11:56:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.978 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.978 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.978 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:30.978 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.237 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.237 "name": "BaseBdev3", 00:14:31.237 "aliases": [ 00:14:31.237 "fd8b9902-05da-4275-b262-be640d31c9e0" 00:14:31.237 ], 00:14:31.237 "product_name": "Malloc disk", 00:14:31.237 "block_size": 512, 00:14:31.237 "num_blocks": 65536, 00:14:31.237 "uuid": "fd8b9902-05da-4275-b262-be640d31c9e0", 00:14:31.237 "assigned_rate_limits": { 00:14:31.237 "rw_ios_per_sec": 0, 00:14:31.237 "rw_mbytes_per_sec": 0, 00:14:31.237 "r_mbytes_per_sec": 0, 00:14:31.237 "w_mbytes_per_sec": 0 00:14:31.237 }, 00:14:31.237 "claimed": true, 00:14:31.237 "claim_type": "exclusive_write", 00:14:31.237 "zoned": false, 00:14:31.237 "supported_io_types": { 00:14:31.237 "read": true, 00:14:31.237 "write": true, 00:14:31.237 "unmap": true, 00:14:31.237 "flush": true, 00:14:31.237 "reset": true, 00:14:31.237 "nvme_admin": false, 00:14:31.237 "nvme_io": false, 00:14:31.237 "nvme_io_md": false, 00:14:31.237 "write_zeroes": true, 00:14:31.237 "zcopy": true, 00:14:31.237 "get_zone_info": false, 00:14:31.237 "zone_management": false, 00:14:31.237 "zone_append": false, 00:14:31.237 "compare": false, 00:14:31.237 "compare_and_write": false, 00:14:31.237 "abort": true, 00:14:31.237 "seek_hole": false, 00:14:31.237 "seek_data": false, 00:14:31.237 "copy": true, 00:14:31.237 "nvme_iov_md": false 00:14:31.237 }, 00:14:31.237 "memory_domains": [ 00:14:31.237 { 00:14:31.237 "dma_device_id": "system", 00:14:31.237 "dma_device_type": 1 00:14:31.237 }, 00:14:31.237 { 00:14:31.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.237 "dma_device_type": 2 00:14:31.237 } 00:14:31.237 ], 00:14:31.237 "driver_specific": {} 00:14:31.237 }' 00:14:31.237 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.237 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.237 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.237 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.495 11:56:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.495 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:31.752 [2024-07-25 11:56:17.822361] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.752 [2024-07-25 11:56:17.822386] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.752 [2024-07-25 11:56:17.822435] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.752 [2024-07-25 11:56:17.822482] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.752 [2024-07-25 11:56:17.822498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2648af0 name Existed_Raid, state offline 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4133160 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4133160 ']' 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4133160 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.752 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4133160 00:14:32.010 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.010 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.010 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4133160' 00:14:32.010 killing process with pid 4133160 00:14:32.010 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4133160 00:14:32.010 [2024-07-25 11:56:17.895211] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.010 11:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4133160 00:14:32.010 [2024-07-25 11:56:17.918271] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.010 11:56:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:32.010 00:14:32.010 real 0m26.875s 00:14:32.010 user 0m49.360s 00:14:32.010 sys 0m4.824s 00:14:32.010 11:56:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.010 11:56:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.010 ************************************ 00:14:32.010 END TEST raid_state_function_test 00:14:32.010 ************************************ 00:14:32.269 11:56:18 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:32.269 11:56:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:32.269 11:56:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.269 11:56:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.269 ************************************ 00:14:32.269 START TEST raid_state_function_test_sb 00:14:32.269 ************************************ 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:32.269 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4138331 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4138331' 00:14:32.270 Process raid pid: 4138331 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4138331 /var/tmp/spdk-raid.sock 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4138331 ']' 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.270 11:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.270 [2024-07-25 11:56:18.257756] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:14:32.270 [2024-07-25 11:56:18.257817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:32.270 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:32.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:32.270 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:32.529 [2024-07-25 11:56:18.390762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.529 [2024-07-25 11:56:18.476536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.529 [2024-07-25 11:56:18.537307] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.529 [2024-07-25 11:56:18.537341] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.098 11:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.098 11:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:33.098 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:33.357 [2024-07-25 11:56:19.363670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.357 [2024-07-25 11:56:19.363705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.357 [2024-07-25 11:56:19.363715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.357 [2024-07-25 11:56:19.363726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.357 [2024-07-25 11:56:19.363734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.357 [2024-07-25 11:56:19.363744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.357 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.616 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.616 "name": "Existed_Raid", 00:14:33.616 "uuid": "ce8483bf-d643-45d7-813a-d29ec7f319a9", 00:14:33.616 "strip_size_kb": 64, 00:14:33.616 "state": "configuring", 00:14:33.616 "raid_level": "concat", 00:14:33.616 "superblock": true, 00:14:33.616 "num_base_bdevs": 3, 00:14:33.616 "num_base_bdevs_discovered": 0, 00:14:33.616 "num_base_bdevs_operational": 3, 00:14:33.616 "base_bdevs_list": [ 00:14:33.616 { 00:14:33.616 "name": "BaseBdev1", 00:14:33.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.616 "is_configured": false, 00:14:33.616 "data_offset": 0, 00:14:33.616 "data_size": 0 00:14:33.616 }, 00:14:33.616 { 00:14:33.616 "name": "BaseBdev2", 00:14:33.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.616 "is_configured": false, 00:14:33.616 "data_offset": 0, 00:14:33.616 "data_size": 0 00:14:33.616 }, 00:14:33.616 { 00:14:33.616 "name": "BaseBdev3", 00:14:33.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.616 "is_configured": false, 00:14:33.616 "data_offset": 0, 00:14:33.616 "data_size": 0 00:14:33.616 } 00:14:33.616 ] 00:14:33.616 }' 00:14:33.616 11:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.616 11:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.185 11:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:34.444 [2024-07-25 11:56:20.326052] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.444 [2024-07-25 11:56:20.326081] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137bf40 name Existed_Raid, state configuring 00:14:34.444 11:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:34.444 [2024-07-25 11:56:20.554680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.444 [2024-07-25 11:56:20.554705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.444 [2024-07-25 11:56:20.554714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.444 [2024-07-25 11:56:20.554724] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.444 [2024-07-25 11:56:20.554732] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.444 [2024-07-25 11:56:20.554742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.703 [2024-07-25 11:56:20.792750] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.703 BaseBdev1 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.703 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.962 11:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.221 [ 00:14:35.221 { 00:14:35.221 "name": "BaseBdev1", 00:14:35.221 "aliases": [ 00:14:35.221 "85423bcf-bad0-4d6a-82b9-3db1cf8af441" 00:14:35.221 ], 00:14:35.221 "product_name": "Malloc disk", 00:14:35.221 "block_size": 512, 00:14:35.221 "num_blocks": 65536, 00:14:35.221 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:35.221 "assigned_rate_limits": { 00:14:35.221 "rw_ios_per_sec": 0, 00:14:35.221 "rw_mbytes_per_sec": 0, 00:14:35.221 "r_mbytes_per_sec": 0, 00:14:35.221 "w_mbytes_per_sec": 0 00:14:35.221 }, 00:14:35.221 "claimed": true, 00:14:35.221 "claim_type": "exclusive_write", 00:14:35.221 "zoned": false, 00:14:35.221 "supported_io_types": { 00:14:35.221 "read": true, 00:14:35.221 "write": true, 00:14:35.221 "unmap": true, 00:14:35.221 "flush": true, 00:14:35.221 "reset": true, 00:14:35.221 "nvme_admin": false, 00:14:35.221 "nvme_io": false, 00:14:35.221 "nvme_io_md": false, 00:14:35.221 "write_zeroes": true, 00:14:35.221 "zcopy": true, 00:14:35.221 "get_zone_info": false, 00:14:35.221 "zone_management": false, 00:14:35.221 "zone_append": false, 00:14:35.221 "compare": false, 00:14:35.221 "compare_and_write": false, 00:14:35.221 "abort": true, 00:14:35.221 "seek_hole": false, 00:14:35.221 "seek_data": false, 00:14:35.221 "copy": true, 00:14:35.221 "nvme_iov_md": false 00:14:35.221 }, 00:14:35.221 "memory_domains": [ 00:14:35.221 { 00:14:35.221 "dma_device_id": "system", 00:14:35.221 "dma_device_type": 1 00:14:35.221 }, 00:14:35.221 { 00:14:35.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.221 "dma_device_type": 2 00:14:35.221 } 00:14:35.221 ], 00:14:35.221 "driver_specific": {} 00:14:35.221 } 00:14:35.221 ] 00:14:35.221 11:56:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.221 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.222 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.222 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.222 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.481 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.481 "name": "Existed_Raid", 00:14:35.481 "uuid": "a666cdc4-7ae8-45c2-8463-40f1e04db74a", 00:14:35.481 "strip_size_kb": 64, 00:14:35.481 "state": "configuring", 00:14:35.481 "raid_level": "concat", 00:14:35.481 "superblock": true, 00:14:35.481 "num_base_bdevs": 3, 00:14:35.481 "num_base_bdevs_discovered": 1, 00:14:35.481 "num_base_bdevs_operational": 3, 00:14:35.481 "base_bdevs_list": [ 00:14:35.481 { 00:14:35.481 "name": "BaseBdev1", 00:14:35.481 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:35.481 "is_configured": true, 00:14:35.481 "data_offset": 2048, 00:14:35.481 "data_size": 63488 00:14:35.481 }, 00:14:35.481 { 00:14:35.481 "name": "BaseBdev2", 00:14:35.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.481 "is_configured": false, 00:14:35.481 "data_offset": 0, 00:14:35.481 "data_size": 0 00:14:35.481 }, 00:14:35.481 { 00:14:35.481 "name": "BaseBdev3", 00:14:35.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.481 "is_configured": false, 00:14:35.481 "data_offset": 0, 00:14:35.481 "data_size": 0 00:14:35.481 } 00:14:35.481 ] 00:14:35.481 }' 00:14:35.481 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.481 11:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.048 11:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.048 [2024-07-25 11:56:22.104087] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.048 [2024-07-25 11:56:22.104121] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137b810 name Existed_Raid, state configuring 00:14:36.048 11:56:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:36.307 [2024-07-25 11:56:22.332733] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.307 [2024-07-25 11:56:22.334094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.307 [2024-07-25 11:56:22.334122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.307 [2024-07-25 11:56:22.334131] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.307 [2024-07-25 11:56:22.334150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.307 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:36.307 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:36.307 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.308 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.567 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.567 "name": "Existed_Raid", 00:14:36.567 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 00:14:36.567 "strip_size_kb": 64, 00:14:36.567 "state": "configuring", 00:14:36.567 "raid_level": "concat", 00:14:36.567 "superblock": true, 00:14:36.567 "num_base_bdevs": 3, 00:14:36.567 "num_base_bdevs_discovered": 1, 00:14:36.567 "num_base_bdevs_operational": 3, 00:14:36.567 "base_bdevs_list": [ 00:14:36.567 { 00:14:36.567 "name": "BaseBdev1", 00:14:36.567 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": "BaseBdev2", 00:14:36.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.567 "is_configured": false, 00:14:36.567 "data_offset": 0, 
00:14:36.567 "data_size": 0 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": "BaseBdev3", 00:14:36.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.567 "is_configured": false, 00:14:36.567 "data_offset": 0, 00:14:36.567 "data_size": 0 00:14:36.567 } 00:14:36.567 ] 00:14:36.567 }' 00:14:36.567 11:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.567 11:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.135 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.394 [2024-07-25 11:56:23.258299] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.394 BaseBdev2 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.394 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.653 [ 00:14:37.653 { 00:14:37.653 "name": "BaseBdev2", 00:14:37.653 "aliases": [ 00:14:37.653 "8b3f9788-4418-4286-bf4e-bd7a09d9dec6" 00:14:37.653 ], 00:14:37.653 "product_name": "Malloc disk", 00:14:37.653 "block_size": 512, 00:14:37.653 "num_blocks": 65536, 00:14:37.653 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:37.653 "assigned_rate_limits": { 00:14:37.653 "rw_ios_per_sec": 0, 00:14:37.653 "rw_mbytes_per_sec": 0, 00:14:37.653 "r_mbytes_per_sec": 0, 00:14:37.653 "w_mbytes_per_sec": 0 00:14:37.653 }, 00:14:37.653 "claimed": true, 00:14:37.653 "claim_type": "exclusive_write", 00:14:37.653 "zoned": false, 00:14:37.653 "supported_io_types": { 00:14:37.653 "read": true, 00:14:37.653 "write": true, 00:14:37.653 "unmap": true, 00:14:37.653 "flush": true, 00:14:37.653 "reset": true, 00:14:37.653 "nvme_admin": false, 00:14:37.653 "nvme_io": false, 00:14:37.653 "nvme_io_md": false, 00:14:37.653 "write_zeroes": true, 00:14:37.653 "zcopy": true, 00:14:37.653 "get_zone_info": false, 00:14:37.653 "zone_management": false, 00:14:37.653 "zone_append": false, 00:14:37.653 "compare": false, 00:14:37.653 "compare_and_write": false, 00:14:37.653 "abort": true, 00:14:37.653 "seek_hole": false, 00:14:37.653 "seek_data": false, 00:14:37.653 "copy": true, 00:14:37.653 "nvme_iov_md": false 00:14:37.653 }, 00:14:37.653 "memory_domains": [ 00:14:37.653 { 00:14:37.653 "dma_device_id": "system", 00:14:37.653 "dma_device_type": 1 00:14:37.653 }, 00:14:37.653 { 00:14:37.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.653 "dma_device_type": 
2 00:14:37.653 } 00:14:37.653 ], 00:14:37.653 "driver_specific": {} 00:14:37.653 } 00:14:37.653 ] 00:14:37.653 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:37.653 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:37.653 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.654 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.913 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.913 "name": "Existed_Raid", 00:14:37.913 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 00:14:37.913 "strip_size_kb": 64, 00:14:37.913 "state": "configuring", 00:14:37.913 "raid_level": "concat", 00:14:37.913 "superblock": true, 00:14:37.913 "num_base_bdevs": 3, 00:14:37.913 "num_base_bdevs_discovered": 2, 00:14:37.913 "num_base_bdevs_operational": 3, 00:14:37.913 "base_bdevs_list": [ 00:14:37.913 { 00:14:37.913 "name": "BaseBdev1", 00:14:37.913 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:37.913 "is_configured": true, 00:14:37.913 "data_offset": 2048, 00:14:37.913 "data_size": 63488 00:14:37.913 }, 00:14:37.913 { 00:14:37.913 "name": "BaseBdev2", 00:14:37.913 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:37.913 "is_configured": true, 00:14:37.913 "data_offset": 2048, 00:14:37.913 "data_size": 63488 00:14:37.913 }, 00:14:37.913 { 00:14:37.913 "name": "BaseBdev3", 00:14:37.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.913 "is_configured": false, 00:14:37.913 "data_offset": 0, 00:14:37.913 "data_size": 0 00:14:37.913 } 00:14:37.913 ] 00:14:37.913 }' 00:14:37.913 11:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.913 11:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.481 11:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.740 [2024-07-25 11:56:24.737417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.740 [2024-07-25 11:56:24.737557] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x137c700 00:14:38.740 [2024-07-25 11:56:24.737570] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.740 [2024-07-25 11:56:24.737732] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x137c3d0 00:14:38.740 [2024-07-25 11:56:24.737844] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x137c700 00:14:38.740 [2024-07-25 11:56:24.737853] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x137c700 00:14:38.740 [2024-07-25 11:56:24.737937] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.740 BaseBdev3 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:38.740 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.999 11:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.258 [ 00:14:39.258 { 00:14:39.258 "name": "BaseBdev3", 00:14:39.258 "aliases": [ 00:14:39.258 "bb7bc6a9-8320-4b87-bcf5-51cf3535d671" 00:14:39.258 ], 00:14:39.258 "product_name": "Malloc disk", 00:14:39.258 "block_size": 512, 00:14:39.258 "num_blocks": 65536, 00:14:39.258 "uuid": "bb7bc6a9-8320-4b87-bcf5-51cf3535d671", 00:14:39.258 "assigned_rate_limits": { 00:14:39.258 "rw_ios_per_sec": 0, 00:14:39.258 "rw_mbytes_per_sec": 0, 00:14:39.258 "r_mbytes_per_sec": 0, 00:14:39.258 "w_mbytes_per_sec": 0 00:14:39.258 }, 00:14:39.258 "claimed": true, 00:14:39.258 "claim_type": "exclusive_write", 00:14:39.258 "zoned": false, 00:14:39.258 "supported_io_types": { 00:14:39.258 "read": true, 00:14:39.258 "write": true, 00:14:39.258 "unmap": true, 00:14:39.258 "flush": true, 00:14:39.258 "reset": true, 00:14:39.258 "nvme_admin": false, 00:14:39.258 "nvme_io": false, 00:14:39.258 "nvme_io_md": false, 00:14:39.258 "write_zeroes": true, 00:14:39.258 "zcopy": true, 00:14:39.258 "get_zone_info": false, 00:14:39.258 "zone_management": false, 00:14:39.258 "zone_append": false, 00:14:39.258 "compare": false, 00:14:39.258 "compare_and_write": false, 00:14:39.258 "abort": true, 00:14:39.258 "seek_hole": false, 00:14:39.258 "seek_data": false, 00:14:39.258 "copy": true, 00:14:39.258 "nvme_iov_md": false 00:14:39.258 }, 00:14:39.258 "memory_domains": [ 00:14:39.258 { 00:14:39.258 "dma_device_id": "system", 00:14:39.258 "dma_device_type": 1 00:14:39.258 }, 00:14:39.258 { 00:14:39.258 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.258 "dma_device_type": 2 00:14:39.258 } 00:14:39.258 ], 00:14:39.258 "driver_specific": {} 00:14:39.258 } 00:14:39.258 ] 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.258 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.517 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.517 "name": "Existed_Raid", 00:14:39.517 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 00:14:39.517 "strip_size_kb": 64, 00:14:39.517 "state": "online", 00:14:39.517 "raid_level": "concat", 00:14:39.517 "superblock": true, 00:14:39.517 "num_base_bdevs": 3, 00:14:39.517 "num_base_bdevs_discovered": 3, 00:14:39.517 "num_base_bdevs_operational": 3, 00:14:39.517 "base_bdevs_list": [ 00:14:39.517 { 00:14:39.517 "name": "BaseBdev1", 00:14:39.517 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:39.517 "is_configured": true, 00:14:39.517 "data_offset": 2048, 00:14:39.517 "data_size": 63488 00:14:39.517 }, 00:14:39.517 { 00:14:39.517 "name": "BaseBdev2", 00:14:39.517 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:39.517 "is_configured": true, 00:14:39.517 "data_offset": 2048, 00:14:39.517 "data_size": 63488 00:14:39.517 }, 00:14:39.517 { 00:14:39.517 "name": "BaseBdev3", 00:14:39.517 "uuid": "bb7bc6a9-8320-4b87-bcf5-51cf3535d671", 00:14:39.517 "is_configured": true, 00:14:39.517 "data_offset": 2048, 00:14:39.517 "data_size": 63488 00:14:39.517 } 00:14:39.517 ] 00:14:39.517 }' 00:14:39.517 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.517 11:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:40.086 11:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:40.086 [2024-07-25 11:56:26.197545] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.345 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:40.345 "name": "Existed_Raid", 00:14:40.345 "aliases": [ 00:14:40.345 "b0157b73-4cff-490e-99eb-472f090624e4" 00:14:40.345 ], 00:14:40.345 "product_name": "Raid Volume", 00:14:40.345 "block_size": 512, 00:14:40.345 "num_blocks": 190464, 00:14:40.345 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 00:14:40.345 "assigned_rate_limits": { 00:14:40.345 "rw_ios_per_sec": 0, 00:14:40.345 "rw_mbytes_per_sec": 0, 00:14:40.345 "r_mbytes_per_sec": 0, 00:14:40.345 "w_mbytes_per_sec": 0 00:14:40.345 }, 00:14:40.345 "claimed": false, 00:14:40.345 "zoned": false, 00:14:40.345 "supported_io_types": { 00:14:40.345 "read": true, 00:14:40.345 "write": true, 00:14:40.345 "unmap": true, 00:14:40.345 "flush": true, 00:14:40.345 "reset": true, 00:14:40.345 "nvme_admin": false, 00:14:40.345 "nvme_io": false, 00:14:40.345 "nvme_io_md": false, 00:14:40.345 "write_zeroes": true, 00:14:40.345 "zcopy": false, 00:14:40.345 "get_zone_info": false, 00:14:40.345 "zone_management": false, 00:14:40.345 "zone_append": false, 00:14:40.345 "compare": false, 00:14:40.345 "compare_and_write": false, 00:14:40.345 "abort": false, 00:14:40.345 "seek_hole": false, 00:14:40.345 "seek_data": false, 00:14:40.345 "copy": false, 00:14:40.345 "nvme_iov_md": false 00:14:40.345 }, 00:14:40.345 "memory_domains": [ 00:14:40.345 { 00:14:40.345 "dma_device_id": "system", 00:14:40.345 "dma_device_type": 1 00:14:40.345 }, 00:14:40.345 { 00:14:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.345 "dma_device_type": 2 00:14:40.345 }, 00:14:40.345 { 00:14:40.345 "dma_device_id": "system", 00:14:40.345 "dma_device_type": 1 00:14:40.345 }, 00:14:40.345 { 00:14:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.345 "dma_device_type": 2 00:14:40.345 }, 00:14:40.345 { 00:14:40.345 "dma_device_id": "system", 00:14:40.345 "dma_device_type": 1 00:14:40.345 }, 00:14:40.345 { 00:14:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.345 "dma_device_type": 2 00:14:40.345 } 00:14:40.345 ], 00:14:40.345 "driver_specific": { 00:14:40.345 "raid": { 00:14:40.345 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 00:14:40.345 "strip_size_kb": 64, 00:14:40.345 "state": "online", 00:14:40.345 "raid_level": "concat", 00:14:40.345 "superblock": true, 00:14:40.345 "num_base_bdevs": 3, 00:14:40.346 "num_base_bdevs_discovered": 3, 00:14:40.346 "num_base_bdevs_operational": 3, 00:14:40.346 "base_bdevs_list": [ 00:14:40.346 { 
00:14:40.346 "name": "BaseBdev1", 00:14:40.346 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:40.346 "is_configured": true, 00:14:40.346 "data_offset": 2048, 00:14:40.346 "data_size": 63488 00:14:40.346 }, 00:14:40.346 { 00:14:40.346 "name": "BaseBdev2", 00:14:40.346 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:40.346 "is_configured": true, 00:14:40.346 "data_offset": 2048, 00:14:40.346 "data_size": 63488 00:14:40.346 }, 00:14:40.346 { 00:14:40.346 "name": "BaseBdev3", 00:14:40.346 "uuid": "bb7bc6a9-8320-4b87-bcf5-51cf3535d671", 00:14:40.346 "is_configured": true, 00:14:40.346 "data_offset": 2048, 00:14:40.346 "data_size": 63488 00:14:40.346 } 00:14:40.346 ] 00:14:40.346 } 00:14:40.346 } 00:14:40.346 }' 00:14:40.346 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.346 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:40.346 BaseBdev2 00:14:40.346 BaseBdev3' 00:14:40.346 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:40.346 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:40.346 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:40.606 "name": "BaseBdev1", 00:14:40.606 "aliases": [ 00:14:40.606 "85423bcf-bad0-4d6a-82b9-3db1cf8af441" 00:14:40.606 ], 00:14:40.606 "product_name": "Malloc disk", 00:14:40.606 "block_size": 512, 00:14:40.606 "num_blocks": 65536, 00:14:40.606 "uuid": "85423bcf-bad0-4d6a-82b9-3db1cf8af441", 00:14:40.606 "assigned_rate_limits": { 00:14:40.606 "rw_ios_per_sec": 0, 00:14:40.606 "rw_mbytes_per_sec": 0, 00:14:40.606 "r_mbytes_per_sec": 0, 00:14:40.606 "w_mbytes_per_sec": 0 00:14:40.606 }, 00:14:40.606 "claimed": true, 00:14:40.606 "claim_type": "exclusive_write", 00:14:40.606 "zoned": false, 00:14:40.606 "supported_io_types": { 00:14:40.606 "read": true, 00:14:40.606 "write": true, 00:14:40.606 "unmap": true, 00:14:40.606 "flush": true, 00:14:40.606 "reset": true, 00:14:40.606 "nvme_admin": false, 00:14:40.606 "nvme_io": false, 00:14:40.606 "nvme_io_md": false, 00:14:40.606 "write_zeroes": true, 00:14:40.606 "zcopy": true, 00:14:40.606 "get_zone_info": false, 00:14:40.606 "zone_management": false, 00:14:40.606 "zone_append": false, 00:14:40.606 "compare": false, 00:14:40.606 "compare_and_write": false, 00:14:40.606 "abort": true, 00:14:40.606 "seek_hole": false, 00:14:40.606 "seek_data": false, 00:14:40.606 "copy": true, 00:14:40.606 "nvme_iov_md": false 00:14:40.606 }, 00:14:40.606 "memory_domains": [ 00:14:40.606 { 00:14:40.606 "dma_device_id": "system", 00:14:40.606 "dma_device_type": 1 00:14:40.606 }, 00:14:40.606 { 00:14:40.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.606 "dma_device_type": 2 00:14:40.606 } 00:14:40.606 ], 00:14:40.606 "driver_specific": {} 00:14:40.606 }' 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:40.606 
11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.606 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:40.865 "name": "BaseBdev2", 00:14:40.865 "aliases": [ 00:14:40.865 "8b3f9788-4418-4286-bf4e-bd7a09d9dec6" 00:14:40.865 ], 00:14:40.865 "product_name": "Malloc disk", 00:14:40.865 "block_size": 512, 00:14:40.865 "num_blocks": 65536, 00:14:40.865 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:40.865 "assigned_rate_limits": { 00:14:40.865 "rw_ios_per_sec": 0, 00:14:40.865 "rw_mbytes_per_sec": 0, 00:14:40.865 "r_mbytes_per_sec": 0, 00:14:40.865 "w_mbytes_per_sec": 0 00:14:40.865 }, 00:14:40.865 "claimed": true, 00:14:40.865 "claim_type": "exclusive_write", 00:14:40.865 "zoned": false, 00:14:40.865 "supported_io_types": { 00:14:40.865 "read": true, 00:14:40.865 "write": true, 00:14:40.865 "unmap": true, 00:14:40.865 "flush": true, 00:14:40.865 "reset": true, 00:14:40.865 "nvme_admin": false, 00:14:40.865 "nvme_io": false, 00:14:40.865 "nvme_io_md": false, 00:14:40.865 "write_zeroes": true, 00:14:40.865 "zcopy": true, 00:14:40.865 "get_zone_info": false, 00:14:40.865 "zone_management": false, 00:14:40.865 "zone_append": false, 00:14:40.865 "compare": false, 00:14:40.865 "compare_and_write": false, 00:14:40.865 "abort": true, 00:14:40.865 "seek_hole": false, 00:14:40.865 "seek_data": false, 00:14:40.865 "copy": true, 00:14:40.865 "nvme_iov_md": false 00:14:40.865 }, 00:14:40.865 "memory_domains": [ 00:14:40.865 { 00:14:40.865 "dma_device_id": "system", 00:14:40.865 "dma_device_type": 1 00:14:40.865 }, 00:14:40.865 { 00:14:40.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.865 "dma_device_type": 2 00:14:40.865 } 00:14:40.865 ], 00:14:40.865 "driver_specific": {} 00:14:40.865 }' 00:14:40.865 11:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.128 11:56:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.128 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.455 "name": "BaseBdev3", 00:14:41.455 "aliases": [ 00:14:41.455 "bb7bc6a9-8320-4b87-bcf5-51cf3535d671" 00:14:41.455 ], 00:14:41.455 "product_name": "Malloc disk", 00:14:41.455 "block_size": 512, 00:14:41.455 "num_blocks": 65536, 00:14:41.455 "uuid": "bb7bc6a9-8320-4b87-bcf5-51cf3535d671", 00:14:41.455 "assigned_rate_limits": { 00:14:41.455 "rw_ios_per_sec": 0, 00:14:41.455 "rw_mbytes_per_sec": 0, 00:14:41.455 "r_mbytes_per_sec": 0, 00:14:41.455 "w_mbytes_per_sec": 0 00:14:41.455 }, 00:14:41.455 "claimed": true, 00:14:41.455 "claim_type": "exclusive_write", 00:14:41.455 "zoned": false, 00:14:41.455 "supported_io_types": { 00:14:41.455 "read": true, 00:14:41.455 "write": true, 00:14:41.455 "unmap": true, 00:14:41.455 "flush": true, 00:14:41.455 "reset": true, 00:14:41.455 "nvme_admin": false, 00:14:41.455 "nvme_io": false, 00:14:41.455 "nvme_io_md": false, 00:14:41.455 "write_zeroes": true, 00:14:41.455 "zcopy": true, 00:14:41.455 "get_zone_info": false, 00:14:41.455 "zone_management": false, 00:14:41.455 "zone_append": false, 00:14:41.455 "compare": false, 00:14:41.455 "compare_and_write": false, 00:14:41.455 "abort": true, 00:14:41.455 "seek_hole": false, 00:14:41.455 "seek_data": false, 00:14:41.455 "copy": true, 00:14:41.455 "nvme_iov_md": false 00:14:41.455 }, 00:14:41.455 "memory_domains": [ 00:14:41.455 { 00:14:41.455 "dma_device_id": "system", 00:14:41.455 "dma_device_type": 1 00:14:41.455 }, 00:14:41.455 { 00:14:41.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.455 "dma_device_type": 2 00:14:41.455 } 00:14:41.455 ], 00:14:41.455 "driver_specific": {} 00:14:41.455 }' 00:14:41.455 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.713 11:56:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.713 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.971 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.972 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.972 11:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.972 [2024-07-25 11:56:28.090326] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.972 [2024-07-25 11:56:28.090348] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.972 [2024-07-25 11:56:28.090385] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.229 "name": "Existed_Raid", 00:14:42.229 "uuid": "b0157b73-4cff-490e-99eb-472f090624e4", 
00:14:42.229 "strip_size_kb": 64, 00:14:42.229 "state": "offline", 00:14:42.229 "raid_level": "concat", 00:14:42.229 "superblock": true, 00:14:42.229 "num_base_bdevs": 3, 00:14:42.229 "num_base_bdevs_discovered": 2, 00:14:42.229 "num_base_bdevs_operational": 2, 00:14:42.229 "base_bdevs_list": [ 00:14:42.229 { 00:14:42.229 "name": null, 00:14:42.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.229 "is_configured": false, 00:14:42.229 "data_offset": 2048, 00:14:42.229 "data_size": 63488 00:14:42.229 }, 00:14:42.229 { 00:14:42.229 "name": "BaseBdev2", 00:14:42.229 "uuid": "8b3f9788-4418-4286-bf4e-bd7a09d9dec6", 00:14:42.229 "is_configured": true, 00:14:42.229 "data_offset": 2048, 00:14:42.229 "data_size": 63488 00:14:42.229 }, 00:14:42.229 { 00:14:42.229 "name": "BaseBdev3", 00:14:42.229 "uuid": "bb7bc6a9-8320-4b87-bcf5-51cf3535d671", 00:14:42.229 "is_configured": true, 00:14:42.229 "data_offset": 2048, 00:14:42.229 "data_size": 63488 00:14:42.229 } 00:14:42.229 ] 00:14:42.229 }' 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.229 11:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.794 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:42.794 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:42.794 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:42.794 11:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.052 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:43.052 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.052 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:43.618 [2024-07-25 11:56:29.595271] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.618 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:43.618 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:43.618 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.618 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:43.877 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:43.877 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.877 11:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:44.443 [2024-07-25 11:56:30.339358] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.443 [2024-07-25 11:56:30.339400] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137c700 name Existed_Raid, state offline 00:14:44.443 
11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:44.443 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:44.443 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.443 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:44.702 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.960 BaseBdev2 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.960 11:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:44.960 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.218 [ 00:14:45.218 { 00:14:45.218 "name": "BaseBdev2", 00:14:45.218 "aliases": [ 00:14:45.218 "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f" 00:14:45.218 ], 00:14:45.218 "product_name": "Malloc disk", 00:14:45.218 "block_size": 512, 00:14:45.218 "num_blocks": 65536, 00:14:45.218 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:45.218 "assigned_rate_limits": { 00:14:45.218 "rw_ios_per_sec": 0, 00:14:45.218 "rw_mbytes_per_sec": 0, 00:14:45.218 "r_mbytes_per_sec": 0, 00:14:45.218 "w_mbytes_per_sec": 0 00:14:45.218 }, 00:14:45.218 "claimed": false, 00:14:45.218 "zoned": false, 00:14:45.218 "supported_io_types": { 00:14:45.218 "read": true, 00:14:45.218 "write": true, 00:14:45.218 "unmap": true, 00:14:45.218 "flush": true, 00:14:45.218 "reset": true, 00:14:45.218 "nvme_admin": false, 00:14:45.218 "nvme_io": false, 00:14:45.218 "nvme_io_md": false, 00:14:45.218 "write_zeroes": true, 00:14:45.218 "zcopy": true, 00:14:45.218 "get_zone_info": false, 00:14:45.218 "zone_management": false, 00:14:45.218 "zone_append": false, 00:14:45.218 "compare": false, 00:14:45.218 "compare_and_write": false, 00:14:45.218 "abort": true, 
00:14:45.218 "seek_hole": false, 00:14:45.218 "seek_data": false, 00:14:45.218 "copy": true, 00:14:45.218 "nvme_iov_md": false 00:14:45.218 }, 00:14:45.218 "memory_domains": [ 00:14:45.218 { 00:14:45.218 "dma_device_id": "system", 00:14:45.218 "dma_device_type": 1 00:14:45.218 }, 00:14:45.218 { 00:14:45.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.218 "dma_device_type": 2 00:14:45.218 } 00:14:45.218 ], 00:14:45.218 "driver_specific": {} 00:14:45.218 } 00:14:45.218 ] 00:14:45.218 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:45.218 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:45.218 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:45.218 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.476 BaseBdev3 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.476 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.734 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.992 [ 00:14:45.992 { 00:14:45.992 "name": "BaseBdev3", 00:14:45.992 "aliases": [ 00:14:45.992 "7cf35ec0-edfe-4857-a354-02f946c287b4" 00:14:45.992 ], 00:14:45.992 "product_name": "Malloc disk", 00:14:45.992 "block_size": 512, 00:14:45.992 "num_blocks": 65536, 00:14:45.992 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:45.992 "assigned_rate_limits": { 00:14:45.992 "rw_ios_per_sec": 0, 00:14:45.992 "rw_mbytes_per_sec": 0, 00:14:45.992 "r_mbytes_per_sec": 0, 00:14:45.992 "w_mbytes_per_sec": 0 00:14:45.992 }, 00:14:45.992 "claimed": false, 00:14:45.992 "zoned": false, 00:14:45.992 "supported_io_types": { 00:14:45.992 "read": true, 00:14:45.992 "write": true, 00:14:45.992 "unmap": true, 00:14:45.992 "flush": true, 00:14:45.992 "reset": true, 00:14:45.992 "nvme_admin": false, 00:14:45.992 "nvme_io": false, 00:14:45.992 "nvme_io_md": false, 00:14:45.992 "write_zeroes": true, 00:14:45.992 "zcopy": true, 00:14:45.992 "get_zone_info": false, 00:14:45.992 "zone_management": false, 00:14:45.992 "zone_append": false, 00:14:45.992 "compare": false, 00:14:45.992 "compare_and_write": false, 00:14:45.992 "abort": true, 00:14:45.992 "seek_hole": false, 00:14:45.992 "seek_data": false, 00:14:45.992 "copy": true, 00:14:45.992 "nvme_iov_md": false 00:14:45.992 }, 00:14:45.992 "memory_domains": [ 00:14:45.992 { 00:14:45.992 "dma_device_id": "system", 00:14:45.992 
"dma_device_type": 1 00:14:45.992 }, 00:14:45.992 { 00:14:45.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.992 "dma_device_type": 2 00:14:45.992 } 00:14:45.992 ], 00:14:45.992 "driver_specific": {} 00:14:45.992 } 00:14:45.992 ] 00:14:45.992 11:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:45.992 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:45.992 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:45.992 11:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:46.251 [2024-07-25 11:56:32.178498] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.251 [2024-07-25 11:56:32.178534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.251 [2024-07-25 11:56:32.178551] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.251 [2024-07-25 11:56:32.179773] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.251 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.509 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.509 "name": "Existed_Raid", 00:14:46.509 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:46.509 "strip_size_kb": 64, 00:14:46.509 "state": "configuring", 00:14:46.509 "raid_level": "concat", 00:14:46.509 "superblock": true, 00:14:46.509 "num_base_bdevs": 3, 00:14:46.509 "num_base_bdevs_discovered": 2, 00:14:46.509 "num_base_bdevs_operational": 3, 00:14:46.509 "base_bdevs_list": [ 00:14:46.509 { 00:14:46.509 "name": "BaseBdev1", 00:14:46.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.509 "is_configured": false, 00:14:46.509 "data_offset": 0, 
00:14:46.509 "data_size": 0 00:14:46.509 }, 00:14:46.509 { 00:14:46.509 "name": "BaseBdev2", 00:14:46.509 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:46.509 "is_configured": true, 00:14:46.509 "data_offset": 2048, 00:14:46.509 "data_size": 63488 00:14:46.509 }, 00:14:46.509 { 00:14:46.509 "name": "BaseBdev3", 00:14:46.509 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:46.509 "is_configured": true, 00:14:46.509 "data_offset": 2048, 00:14:46.509 "data_size": 63488 00:14:46.509 } 00:14:46.509 ] 00:14:46.509 }' 00:14:46.509 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.509 11:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.075 11:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:47.333 [2024-07-25 11:56:33.201153] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.333 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.590 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.590 "name": "Existed_Raid", 00:14:47.590 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:47.590 "strip_size_kb": 64, 00:14:47.590 "state": "configuring", 00:14:47.590 "raid_level": "concat", 00:14:47.590 "superblock": true, 00:14:47.590 "num_base_bdevs": 3, 00:14:47.590 "num_base_bdevs_discovered": 1, 00:14:47.590 "num_base_bdevs_operational": 3, 00:14:47.590 "base_bdevs_list": [ 00:14:47.590 { 00:14:47.590 "name": "BaseBdev1", 00:14:47.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.590 "is_configured": false, 00:14:47.590 "data_offset": 0, 00:14:47.590 "data_size": 0 00:14:47.590 }, 00:14:47.590 { 00:14:47.590 "name": null, 00:14:47.590 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:47.590 "is_configured": false, 00:14:47.590 "data_offset": 2048, 00:14:47.590 "data_size": 63488 00:14:47.591 }, 
00:14:47.591 { 00:14:47.591 "name": "BaseBdev3", 00:14:47.591 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:47.591 "is_configured": true, 00:14:47.591 "data_offset": 2048, 00:14:47.591 "data_size": 63488 00:14:47.591 } 00:14:47.591 ] 00:14:47.591 }' 00:14:47.591 11:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.591 11:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.157 11:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.157 11:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.157 11:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:48.157 11:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.722 [2024-07-25 11:56:34.708411] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.722 BaseBdev1 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.722 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.980 11:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.545 [ 00:14:49.545 { 00:14:49.545 "name": "BaseBdev1", 00:14:49.545 "aliases": [ 00:14:49.545 "4dab6637-2a3f-4d92-8284-bbe78770b954" 00:14:49.545 ], 00:14:49.545 "product_name": "Malloc disk", 00:14:49.545 "block_size": 512, 00:14:49.545 "num_blocks": 65536, 00:14:49.545 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:49.545 "assigned_rate_limits": { 00:14:49.545 "rw_ios_per_sec": 0, 00:14:49.545 "rw_mbytes_per_sec": 0, 00:14:49.545 "r_mbytes_per_sec": 0, 00:14:49.545 "w_mbytes_per_sec": 0 00:14:49.545 }, 00:14:49.545 "claimed": true, 00:14:49.545 "claim_type": "exclusive_write", 00:14:49.545 "zoned": false, 00:14:49.545 "supported_io_types": { 00:14:49.545 "read": true, 00:14:49.545 "write": true, 00:14:49.545 "unmap": true, 00:14:49.545 "flush": true, 00:14:49.545 "reset": true, 00:14:49.545 "nvme_admin": false, 00:14:49.545 "nvme_io": false, 00:14:49.545 "nvme_io_md": false, 00:14:49.545 "write_zeroes": true, 00:14:49.545 "zcopy": true, 00:14:49.545 "get_zone_info": false, 00:14:49.545 "zone_management": false, 00:14:49.545 "zone_append": false, 00:14:49.545 "compare": false, 00:14:49.545 "compare_and_write": 
false, 00:14:49.545 "abort": true, 00:14:49.545 "seek_hole": false, 00:14:49.545 "seek_data": false, 00:14:49.545 "copy": true, 00:14:49.545 "nvme_iov_md": false 00:14:49.545 }, 00:14:49.545 "memory_domains": [ 00:14:49.545 { 00:14:49.545 "dma_device_id": "system", 00:14:49.545 "dma_device_type": 1 00:14:49.545 }, 00:14:49.545 { 00:14:49.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.545 "dma_device_type": 2 00:14:49.545 } 00:14:49.545 ], 00:14:49.545 "driver_specific": {} 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.546 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.804 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.804 "name": "Existed_Raid", 00:14:49.804 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:49.804 "strip_size_kb": 64, 00:14:49.804 "state": "configuring", 00:14:49.804 "raid_level": "concat", 00:14:49.804 "superblock": true, 00:14:49.804 "num_base_bdevs": 3, 00:14:49.804 "num_base_bdevs_discovered": 2, 00:14:49.804 "num_base_bdevs_operational": 3, 00:14:49.804 "base_bdevs_list": [ 00:14:49.804 { 00:14:49.804 "name": "BaseBdev1", 00:14:49.804 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:49.804 "is_configured": true, 00:14:49.804 "data_offset": 2048, 00:14:49.804 "data_size": 63488 00:14:49.804 }, 00:14:49.804 { 00:14:49.804 "name": null, 00:14:49.804 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:49.804 "is_configured": false, 00:14:49.804 "data_offset": 2048, 00:14:49.804 "data_size": 63488 00:14:49.804 }, 00:14:49.804 { 00:14:49.804 "name": "BaseBdev3", 00:14:49.804 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:49.804 "is_configured": true, 00:14:49.804 "data_offset": 2048, 00:14:49.804 "data_size": 63488 00:14:49.804 } 00:14:49.804 ] 00:14:49.804 }' 00:14:49.804 11:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.804 11:56:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.370 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.370 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.370 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:50.370 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:50.629 [2024-07-25 11:56:36.605451] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.629 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.886 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.886 "name": "Existed_Raid", 00:14:50.886 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:50.886 "strip_size_kb": 64, 00:14:50.886 "state": "configuring", 00:14:50.886 "raid_level": "concat", 00:14:50.886 "superblock": true, 00:14:50.886 "num_base_bdevs": 3, 00:14:50.886 "num_base_bdevs_discovered": 1, 00:14:50.886 "num_base_bdevs_operational": 3, 00:14:50.886 "base_bdevs_list": [ 00:14:50.886 { 00:14:50.886 "name": "BaseBdev1", 00:14:50.886 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:50.886 "is_configured": true, 00:14:50.886 "data_offset": 2048, 00:14:50.886 "data_size": 63488 00:14:50.886 }, 00:14:50.886 { 00:14:50.886 "name": null, 00:14:50.886 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:50.886 "is_configured": false, 00:14:50.886 "data_offset": 2048, 00:14:50.886 "data_size": 63488 00:14:50.887 }, 00:14:50.887 { 00:14:50.887 "name": null, 00:14:50.887 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:50.887 "is_configured": false, 00:14:50.887 "data_offset": 2048, 00:14:50.887 "data_size": 63488 00:14:50.887 } 00:14:50.887 ] 00:14:50.887 }' 
00:14:50.887 11:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.887 11:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.452 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.452 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.453 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:51.453 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:51.711 [2024-07-25 11:56:37.736460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.711 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.969 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.969 "name": "Existed_Raid", 00:14:51.969 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:51.969 "strip_size_kb": 64, 00:14:51.969 "state": "configuring", 00:14:51.969 "raid_level": "concat", 00:14:51.969 "superblock": true, 00:14:51.969 "num_base_bdevs": 3, 00:14:51.969 "num_base_bdevs_discovered": 2, 00:14:51.969 "num_base_bdevs_operational": 3, 00:14:51.969 "base_bdevs_list": [ 00:14:51.969 { 00:14:51.969 "name": "BaseBdev1", 00:14:51.969 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:51.969 "is_configured": true, 00:14:51.969 "data_offset": 2048, 00:14:51.969 "data_size": 63488 00:14:51.969 }, 00:14:51.969 { 00:14:51.969 "name": null, 00:14:51.969 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:51.969 "is_configured": false, 00:14:51.969 "data_offset": 2048, 00:14:51.969 "data_size": 63488 00:14:51.969 }, 00:14:51.969 { 00:14:51.969 "name": "BaseBdev3", 
00:14:51.969 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:51.969 "is_configured": true, 00:14:51.969 "data_offset": 2048, 00:14:51.969 "data_size": 63488 00:14:51.969 } 00:14:51.969 ] 00:14:51.969 }' 00:14:51.969 11:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.969 11:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.535 11:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:52.535 11:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.793 11:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:52.793 11:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:53.051 [2024-07-25 11:56:38.999811] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.051 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.052 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.052 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.052 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.309 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.309 "name": "Existed_Raid", 00:14:53.309 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:53.309 "strip_size_kb": 64, 00:14:53.309 "state": "configuring", 00:14:53.309 "raid_level": "concat", 00:14:53.309 "superblock": true, 00:14:53.309 "num_base_bdevs": 3, 00:14:53.309 "num_base_bdevs_discovered": 1, 00:14:53.309 "num_base_bdevs_operational": 3, 00:14:53.309 "base_bdevs_list": [ 00:14:53.309 { 00:14:53.309 "name": null, 00:14:53.309 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:53.309 "is_configured": false, 00:14:53.309 "data_offset": 2048, 00:14:53.309 "data_size": 63488 00:14:53.309 }, 00:14:53.309 { 00:14:53.309 "name": null, 00:14:53.309 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 
00:14:53.309 "is_configured": false, 00:14:53.309 "data_offset": 2048, 00:14:53.309 "data_size": 63488 00:14:53.310 }, 00:14:53.310 { 00:14:53.310 "name": "BaseBdev3", 00:14:53.310 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:53.310 "is_configured": true, 00:14:53.310 "data_offset": 2048, 00:14:53.310 "data_size": 63488 00:14:53.310 } 00:14:53.310 ] 00:14:53.310 }' 00:14:53.310 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.310 11:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.875 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.875 11:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.133 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:54.133 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:54.392 [2024-07-25 11:56:40.269231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.392 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.650 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.650 "name": "Existed_Raid", 00:14:54.650 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:54.650 "strip_size_kb": 64, 00:14:54.650 "state": "configuring", 00:14:54.650 "raid_level": "concat", 00:14:54.650 "superblock": true, 00:14:54.650 "num_base_bdevs": 3, 00:14:54.650 "num_base_bdevs_discovered": 2, 00:14:54.650 "num_base_bdevs_operational": 3, 00:14:54.650 "base_bdevs_list": [ 00:14:54.650 { 00:14:54.650 "name": null, 00:14:54.650 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:54.650 
"is_configured": false, 00:14:54.650 "data_offset": 2048, 00:14:54.650 "data_size": 63488 00:14:54.650 }, 00:14:54.650 { 00:14:54.650 "name": "BaseBdev2", 00:14:54.650 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:54.650 "is_configured": true, 00:14:54.650 "data_offset": 2048, 00:14:54.650 "data_size": 63488 00:14:54.650 }, 00:14:54.650 { 00:14:54.650 "name": "BaseBdev3", 00:14:54.650 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:54.650 "is_configured": true, 00:14:54.650 "data_offset": 2048, 00:14:54.650 "data_size": 63488 00:14:54.650 } 00:14:54.650 ] 00:14:54.650 }' 00:14:54.650 11:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.650 11:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.217 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.217 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.217 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:55.217 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.217 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:55.475 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4dab6637-2a3f-4d92-8284-bbe78770b954 00:14:55.765 [2024-07-25 11:56:41.756364] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:55.765 [2024-07-25 11:56:41.756497] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x137aa80 00:14:55.765 [2024-07-25 11:56:41.756514] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:55.765 [2024-07-25 11:56:41.756668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152ea50 00:14:55.765 [2024-07-25 11:56:41.756769] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x137aa80 00:14:55.765 [2024-07-25 11:56:41.756779] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x137aa80 00:14:55.765 [2024-07-25 11:56:41.756858] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.765 NewBaseBdev 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.765 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.023 11:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:56.281 [ 00:14:56.281 { 00:14:56.281 "name": "NewBaseBdev", 00:14:56.281 "aliases": [ 00:14:56.281 "4dab6637-2a3f-4d92-8284-bbe78770b954" 00:14:56.281 ], 00:14:56.281 "product_name": "Malloc disk", 00:14:56.281 "block_size": 512, 00:14:56.282 "num_blocks": 65536, 00:14:56.282 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:56.282 "assigned_rate_limits": { 00:14:56.282 "rw_ios_per_sec": 0, 00:14:56.282 "rw_mbytes_per_sec": 0, 00:14:56.282 "r_mbytes_per_sec": 0, 00:14:56.282 "w_mbytes_per_sec": 0 00:14:56.282 }, 00:14:56.282 "claimed": true, 00:14:56.282 "claim_type": "exclusive_write", 00:14:56.282 "zoned": false, 00:14:56.282 "supported_io_types": { 00:14:56.282 "read": true, 00:14:56.282 "write": true, 00:14:56.282 "unmap": true, 00:14:56.282 "flush": true, 00:14:56.282 "reset": true, 00:14:56.282 "nvme_admin": false, 00:14:56.282 "nvme_io": false, 00:14:56.282 "nvme_io_md": false, 00:14:56.282 "write_zeroes": true, 00:14:56.282 "zcopy": true, 00:14:56.282 "get_zone_info": false, 00:14:56.282 "zone_management": false, 00:14:56.282 "zone_append": false, 00:14:56.282 "compare": false, 00:14:56.282 "compare_and_write": false, 00:14:56.282 "abort": true, 00:14:56.282 "seek_hole": false, 00:14:56.282 "seek_data": false, 00:14:56.282 "copy": true, 00:14:56.282 "nvme_iov_md": false 00:14:56.282 }, 00:14:56.282 "memory_domains": [ 00:14:56.282 { 00:14:56.282 "dma_device_id": "system", 00:14:56.282 "dma_device_type": 1 00:14:56.282 }, 00:14:56.282 { 00:14:56.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.282 "dma_device_type": 2 00:14:56.282 } 00:14:56.282 ], 00:14:56.282 "driver_specific": {} 00:14:56.282 } 00:14:56.282 ] 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.282 11:56:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.540 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.540 "name": "Existed_Raid", 00:14:56.540 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:56.540 "strip_size_kb": 64, 00:14:56.540 "state": "online", 00:14:56.540 "raid_level": "concat", 00:14:56.540 "superblock": true, 00:14:56.540 "num_base_bdevs": 3, 00:14:56.540 "num_base_bdevs_discovered": 3, 00:14:56.540 "num_base_bdevs_operational": 3, 00:14:56.540 "base_bdevs_list": [ 00:14:56.540 { 00:14:56.540 "name": "NewBaseBdev", 00:14:56.540 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:56.540 "is_configured": true, 00:14:56.540 "data_offset": 2048, 00:14:56.540 "data_size": 63488 00:14:56.540 }, 00:14:56.540 { 00:14:56.540 "name": "BaseBdev2", 00:14:56.540 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:56.540 "is_configured": true, 00:14:56.540 "data_offset": 2048, 00:14:56.540 "data_size": 63488 00:14:56.540 }, 00:14:56.540 { 00:14:56.540 "name": "BaseBdev3", 00:14:56.540 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:56.540 "is_configured": true, 00:14:56.540 "data_offset": 2048, 00:14:56.540 "data_size": 63488 00:14:56.540 } 00:14:56.540 ] 00:14:56.540 }' 00:14:56.540 11:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.540 11:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:57.107 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:57.367 [2024-07-25 11:56:43.236577] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.367 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:57.367 "name": "Existed_Raid", 00:14:57.367 "aliases": [ 00:14:57.367 "b368d7e3-1231-4b6b-8294-682c1dbd7d49" 00:14:57.367 ], 00:14:57.367 "product_name": "Raid Volume", 00:14:57.367 "block_size": 512, 00:14:57.367 "num_blocks": 190464, 00:14:57.367 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:57.367 "assigned_rate_limits": { 00:14:57.367 "rw_ios_per_sec": 0, 00:14:57.367 "rw_mbytes_per_sec": 0, 00:14:57.367 "r_mbytes_per_sec": 0, 00:14:57.367 "w_mbytes_per_sec": 0 00:14:57.367 }, 00:14:57.367 "claimed": false, 00:14:57.367 "zoned": false, 00:14:57.367 "supported_io_types": { 00:14:57.367 "read": true, 00:14:57.367 "write": true, 00:14:57.367 "unmap": true, 00:14:57.367 "flush": true, 00:14:57.367 "reset": true, 00:14:57.367 "nvme_admin": false, 00:14:57.367 "nvme_io": false, 00:14:57.367 
"nvme_io_md": false, 00:14:57.367 "write_zeroes": true, 00:14:57.367 "zcopy": false, 00:14:57.367 "get_zone_info": false, 00:14:57.367 "zone_management": false, 00:14:57.367 "zone_append": false, 00:14:57.367 "compare": false, 00:14:57.367 "compare_and_write": false, 00:14:57.367 "abort": false, 00:14:57.367 "seek_hole": false, 00:14:57.367 "seek_data": false, 00:14:57.367 "copy": false, 00:14:57.367 "nvme_iov_md": false 00:14:57.367 }, 00:14:57.367 "memory_domains": [ 00:14:57.367 { 00:14:57.367 "dma_device_id": "system", 00:14:57.367 "dma_device_type": 1 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.367 "dma_device_type": 2 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "dma_device_id": "system", 00:14:57.367 "dma_device_type": 1 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.367 "dma_device_type": 2 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "dma_device_id": "system", 00:14:57.367 "dma_device_type": 1 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.367 "dma_device_type": 2 00:14:57.367 } 00:14:57.367 ], 00:14:57.367 "driver_specific": { 00:14:57.367 "raid": { 00:14:57.367 "uuid": "b368d7e3-1231-4b6b-8294-682c1dbd7d49", 00:14:57.367 "strip_size_kb": 64, 00:14:57.367 "state": "online", 00:14:57.368 "raid_level": "concat", 00:14:57.368 "superblock": true, 00:14:57.368 "num_base_bdevs": 3, 00:14:57.368 "num_base_bdevs_discovered": 3, 00:14:57.368 "num_base_bdevs_operational": 3, 00:14:57.368 "base_bdevs_list": [ 00:14:57.368 { 00:14:57.368 "name": "NewBaseBdev", 00:14:57.368 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:57.368 "is_configured": true, 00:14:57.368 "data_offset": 2048, 00:14:57.368 "data_size": 63488 00:14:57.368 }, 00:14:57.368 { 00:14:57.368 "name": "BaseBdev2", 00:14:57.368 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:57.368 "is_configured": true, 00:14:57.368 "data_offset": 2048, 00:14:57.368 "data_size": 63488 00:14:57.368 }, 00:14:57.368 { 00:14:57.368 "name": "BaseBdev3", 00:14:57.368 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:57.368 "is_configured": true, 00:14:57.368 "data_offset": 2048, 00:14:57.368 "data_size": 63488 00:14:57.368 } 00:14:57.368 ] 00:14:57.368 } 00:14:57.368 } 00:14:57.368 }' 00:14:57.368 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.368 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:57.368 BaseBdev2 00:14:57.368 BaseBdev3' 00:14:57.368 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.368 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:57.368 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.626 "name": "NewBaseBdev", 00:14:57.626 "aliases": [ 00:14:57.626 "4dab6637-2a3f-4d92-8284-bbe78770b954" 00:14:57.626 ], 00:14:57.626 "product_name": "Malloc disk", 00:14:57.626 "block_size": 512, 00:14:57.626 "num_blocks": 65536, 00:14:57.626 "uuid": "4dab6637-2a3f-4d92-8284-bbe78770b954", 00:14:57.626 "assigned_rate_limits": { 00:14:57.626 
"rw_ios_per_sec": 0, 00:14:57.626 "rw_mbytes_per_sec": 0, 00:14:57.626 "r_mbytes_per_sec": 0, 00:14:57.626 "w_mbytes_per_sec": 0 00:14:57.626 }, 00:14:57.626 "claimed": true, 00:14:57.626 "claim_type": "exclusive_write", 00:14:57.626 "zoned": false, 00:14:57.626 "supported_io_types": { 00:14:57.626 "read": true, 00:14:57.626 "write": true, 00:14:57.626 "unmap": true, 00:14:57.626 "flush": true, 00:14:57.626 "reset": true, 00:14:57.626 "nvme_admin": false, 00:14:57.626 "nvme_io": false, 00:14:57.626 "nvme_io_md": false, 00:14:57.626 "write_zeroes": true, 00:14:57.626 "zcopy": true, 00:14:57.626 "get_zone_info": false, 00:14:57.626 "zone_management": false, 00:14:57.626 "zone_append": false, 00:14:57.626 "compare": false, 00:14:57.626 "compare_and_write": false, 00:14:57.626 "abort": true, 00:14:57.626 "seek_hole": false, 00:14:57.626 "seek_data": false, 00:14:57.626 "copy": true, 00:14:57.626 "nvme_iov_md": false 00:14:57.626 }, 00:14:57.626 "memory_domains": [ 00:14:57.626 { 00:14:57.626 "dma_device_id": "system", 00:14:57.626 "dma_device_type": 1 00:14:57.626 }, 00:14:57.626 { 00:14:57.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.626 "dma_device_type": 2 00:14:57.626 } 00:14:57.626 ], 00:14:57.626 "driver_specific": {} 00:14:57.626 }' 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.626 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:57.885 11:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.143 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.143 "name": "BaseBdev2", 00:14:58.143 "aliases": [ 00:14:58.143 "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f" 00:14:58.143 ], 00:14:58.143 "product_name": "Malloc disk", 00:14:58.143 "block_size": 512, 00:14:58.143 "num_blocks": 65536, 00:14:58.143 "uuid": "ed322bb7-b79f-43f3-b842-8a88fc0a7f2f", 00:14:58.143 "assigned_rate_limits": { 00:14:58.143 "rw_ios_per_sec": 0, 00:14:58.143 "rw_mbytes_per_sec": 0, 00:14:58.143 "r_mbytes_per_sec": 0, 00:14:58.143 "w_mbytes_per_sec": 0 
00:14:58.143 }, 00:14:58.143 "claimed": true, 00:14:58.143 "claim_type": "exclusive_write", 00:14:58.143 "zoned": false, 00:14:58.143 "supported_io_types": { 00:14:58.143 "read": true, 00:14:58.143 "write": true, 00:14:58.143 "unmap": true, 00:14:58.143 "flush": true, 00:14:58.143 "reset": true, 00:14:58.143 "nvme_admin": false, 00:14:58.143 "nvme_io": false, 00:14:58.143 "nvme_io_md": false, 00:14:58.143 "write_zeroes": true, 00:14:58.143 "zcopy": true, 00:14:58.143 "get_zone_info": false, 00:14:58.143 "zone_management": false, 00:14:58.143 "zone_append": false, 00:14:58.143 "compare": false, 00:14:58.143 "compare_and_write": false, 00:14:58.143 "abort": true, 00:14:58.143 "seek_hole": false, 00:14:58.143 "seek_data": false, 00:14:58.143 "copy": true, 00:14:58.143 "nvme_iov_md": false 00:14:58.143 }, 00:14:58.143 "memory_domains": [ 00:14:58.143 { 00:14:58.143 "dma_device_id": "system", 00:14:58.143 "dma_device_type": 1 00:14:58.143 }, 00:14:58.143 { 00:14:58.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.144 "dma_device_type": 2 00:14:58.144 } 00:14:58.144 ], 00:14:58.144 "driver_specific": {} 00:14:58.144 }' 00:14:58.144 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.144 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.144 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.144 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.144 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.402 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.403 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.403 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:58.403 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:58.403 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.661 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.661 "name": "BaseBdev3", 00:14:58.661 "aliases": [ 00:14:58.661 "7cf35ec0-edfe-4857-a354-02f946c287b4" 00:14:58.661 ], 00:14:58.661 "product_name": "Malloc disk", 00:14:58.661 "block_size": 512, 00:14:58.661 "num_blocks": 65536, 00:14:58.661 "uuid": "7cf35ec0-edfe-4857-a354-02f946c287b4", 00:14:58.661 "assigned_rate_limits": { 00:14:58.661 "rw_ios_per_sec": 0, 00:14:58.661 "rw_mbytes_per_sec": 0, 00:14:58.661 "r_mbytes_per_sec": 0, 00:14:58.661 "w_mbytes_per_sec": 0 00:14:58.661 }, 00:14:58.661 "claimed": true, 00:14:58.661 "claim_type": "exclusive_write", 00:14:58.661 "zoned": false, 00:14:58.661 
"supported_io_types": { 00:14:58.661 "read": true, 00:14:58.661 "write": true, 00:14:58.661 "unmap": true, 00:14:58.661 "flush": true, 00:14:58.661 "reset": true, 00:14:58.661 "nvme_admin": false, 00:14:58.661 "nvme_io": false, 00:14:58.661 "nvme_io_md": false, 00:14:58.661 "write_zeroes": true, 00:14:58.661 "zcopy": true, 00:14:58.661 "get_zone_info": false, 00:14:58.661 "zone_management": false, 00:14:58.661 "zone_append": false, 00:14:58.661 "compare": false, 00:14:58.661 "compare_and_write": false, 00:14:58.661 "abort": true, 00:14:58.661 "seek_hole": false, 00:14:58.661 "seek_data": false, 00:14:58.661 "copy": true, 00:14:58.661 "nvme_iov_md": false 00:14:58.661 }, 00:14:58.661 "memory_domains": [ 00:14:58.661 { 00:14:58.661 "dma_device_id": "system", 00:14:58.661 "dma_device_type": 1 00:14:58.661 }, 00:14:58.661 { 00:14:58.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.661 "dma_device_type": 2 00:14:58.661 } 00:14:58.661 ], 00:14:58.661 "driver_specific": {} 00:14:58.661 }' 00:14:58.661 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.661 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.661 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.661 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.920 11:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.920 11:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.920 11:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.178 [2024-07-25 11:56:45.221541] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.178 [2024-07-25 11:56:45.221564] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.178 [2024-07-25 11:56:45.221611] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.178 [2024-07-25 11:56:45.221656] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.179 [2024-07-25 11:56:45.221667] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137aa80 name Existed_Raid, state offline 00:14:59.179 11:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4138331 00:14:59.179 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4138331 ']' 00:14:59.179 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4138331 00:14:59.179 11:56:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:59.179 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.179 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4138331 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4138331' 00:14:59.437 killing process with pid 4138331 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4138331 00:14:59.437 [2024-07-25 11:56:45.298498] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4138331 00:14:59.437 [2024-07-25 11:56:45.322718] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.437 11:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:59.437 00:14:59.438 real 0m27.322s 00:14:59.438 user 0m50.132s 00:14:59.438 sys 0m4.891s 00:14:59.438 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.438 11:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.438 ************************************ 00:14:59.438 END TEST raid_state_function_test_sb 00:14:59.438 ************************************ 00:14:59.696 11:56:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:59.696 11:56:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:59.696 11:56:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.696 11:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.696 ************************************ 00:14:59.696 START TEST raid_superblock_test 00:14:59.696 ************************************ 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local 
strip_size_create_arg 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4143623 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4143623 /var/tmp/spdk-raid.sock 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4143623 ']' 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.696 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.697 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.697 11:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.697 [2024-07-25 11:56:45.661994] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:14:59.697 [2024-07-25 11:56:45.662052] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143623 ] 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:59.697 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:59.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:59.697 [2024-07-25 11:56:45.794541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.955 [2024-07-25 11:56:45.878291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.955 [2024-07-25 11:56:45.941371] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.955 [2024-07-25 11:56:45.941412] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.522 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:00.781 malloc1 00:15:00.781 11:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.040 [2024-07-25 11:56:47.007513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.040 [2024-07-25 11:56:47.007560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.040 [2024-07-25 11:56:47.007577] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd6c2f0 00:15:01.040 [2024-07-25 11:56:47.007588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.040 [2024-07-25 11:56:47.009038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.040 [2024-07-25 11:56:47.009065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.040 pt1 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.040 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:01.298 malloc2 00:15:01.298 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.556 [2024-07-25 11:56:47.472986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.556 [2024-07-25 11:56:47.473028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.556 [2024-07-25 11:56:47.473043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd6d6d0 00:15:01.556 [2024-07-25 11:56:47.473055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.556 [2024-07-25 11:56:47.474422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.556 [2024-07-25 11:56:47.474450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.556 pt2 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.556 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.557 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.557 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:01.815 malloc3 00:15:01.815 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.074 [2024-07-25 11:56:47.934282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.074 [2024-07-25 11:56:47.934323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.074 [2024-07-25 11:56:47.934339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf066b0 00:15:02.074 [2024-07-25 11:56:47.934350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.074 [2024-07-25 11:56:47.935642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.074 [2024-07-25 11:56:47.935668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.074 pt3 00:15:02.074 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:02.074 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:02.074 11:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:02.074 [2024-07-25 11:56:48.158891] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.074 [2024-07-25 11:56:48.159973] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.074 [2024-07-25 11:56:48.160024] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.074 [2024-07-25 11:56:48.160176] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xf06cb0 00:15:02.074 [2024-07-25 11:56:48.160186] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:02.074 [2024-07-25 11:56:48.160352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf05270 00:15:02.074 [2024-07-25 11:56:48.160480] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf06cb0 00:15:02.074 [2024-07-25 11:56:48.160489] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf06cb0 00:15:02.074 [2024-07-25 11:56:48.160571] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.074 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.333 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.333 "name": "raid_bdev1", 00:15:02.333 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:02.333 "strip_size_kb": 64, 00:15:02.333 "state": "online", 00:15:02.333 "raid_level": "concat", 00:15:02.333 "superblock": true, 00:15:02.333 "num_base_bdevs": 3, 00:15:02.333 "num_base_bdevs_discovered": 3, 00:15:02.333 "num_base_bdevs_operational": 3, 00:15:02.333 "base_bdevs_list": [ 00:15:02.333 { 00:15:02.333 "name": "pt1", 00:15:02.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.333 "is_configured": true, 00:15:02.333 "data_offset": 2048, 00:15:02.333 "data_size": 63488 00:15:02.333 }, 00:15:02.333 { 00:15:02.333 "name": "pt2", 00:15:02.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.333 "is_configured": true, 00:15:02.333 "data_offset": 2048, 00:15:02.333 "data_size": 63488 00:15:02.333 }, 00:15:02.333 { 00:15:02.333 "name": "pt3", 00:15:02.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.333 "is_configured": true, 00:15:02.333 "data_offset": 2048, 00:15:02.333 "data_size": 63488 00:15:02.333 } 00:15:02.333 ] 00:15:02.333 }' 00:15:02.333 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.333 11:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.900 11:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:03.159 [2024-07-25 11:56:49.185826] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.159 "name": "raid_bdev1", 00:15:03.159 "aliases": [ 00:15:03.159 "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8" 00:15:03.159 ], 00:15:03.159 "product_name": "Raid Volume", 00:15:03.159 "block_size": 512, 00:15:03.159 "num_blocks": 190464, 00:15:03.159 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:03.159 "assigned_rate_limits": { 00:15:03.159 "rw_ios_per_sec": 0, 00:15:03.159 "rw_mbytes_per_sec": 0, 00:15:03.159 
"r_mbytes_per_sec": 0, 00:15:03.159 "w_mbytes_per_sec": 0 00:15:03.159 }, 00:15:03.159 "claimed": false, 00:15:03.159 "zoned": false, 00:15:03.159 "supported_io_types": { 00:15:03.159 "read": true, 00:15:03.159 "write": true, 00:15:03.159 "unmap": true, 00:15:03.159 "flush": true, 00:15:03.159 "reset": true, 00:15:03.159 "nvme_admin": false, 00:15:03.159 "nvme_io": false, 00:15:03.159 "nvme_io_md": false, 00:15:03.159 "write_zeroes": true, 00:15:03.159 "zcopy": false, 00:15:03.159 "get_zone_info": false, 00:15:03.159 "zone_management": false, 00:15:03.159 "zone_append": false, 00:15:03.159 "compare": false, 00:15:03.159 "compare_and_write": false, 00:15:03.159 "abort": false, 00:15:03.159 "seek_hole": false, 00:15:03.159 "seek_data": false, 00:15:03.159 "copy": false, 00:15:03.159 "nvme_iov_md": false 00:15:03.159 }, 00:15:03.159 "memory_domains": [ 00:15:03.159 { 00:15:03.159 "dma_device_id": "system", 00:15:03.159 "dma_device_type": 1 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.159 "dma_device_type": 2 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "dma_device_id": "system", 00:15:03.159 "dma_device_type": 1 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.159 "dma_device_type": 2 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "dma_device_id": "system", 00:15:03.159 "dma_device_type": 1 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.159 "dma_device_type": 2 00:15:03.159 } 00:15:03.159 ], 00:15:03.159 "driver_specific": { 00:15:03.159 "raid": { 00:15:03.159 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:03.159 "strip_size_kb": 64, 00:15:03.159 "state": "online", 00:15:03.159 "raid_level": "concat", 00:15:03.159 "superblock": true, 00:15:03.159 "num_base_bdevs": 3, 00:15:03.159 "num_base_bdevs_discovered": 3, 00:15:03.159 "num_base_bdevs_operational": 3, 00:15:03.159 "base_bdevs_list": [ 00:15:03.159 { 00:15:03.159 "name": "pt1", 00:15:03.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "name": "pt2", 00:15:03.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "name": "pt3", 00:15:03.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 00:15:03.159 } 00:15:03.159 ] 00:15:03.159 } 00:15:03.159 } 00:15:03.159 }' 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:03.159 pt2 00:15:03.159 pt3' 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:03.159 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.418 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.418 "name": "pt1", 00:15:03.418 "aliases": [ 
00:15:03.418 "00000000-0000-0000-0000-000000000001" 00:15:03.418 ], 00:15:03.418 "product_name": "passthru", 00:15:03.418 "block_size": 512, 00:15:03.418 "num_blocks": 65536, 00:15:03.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.418 "assigned_rate_limits": { 00:15:03.418 "rw_ios_per_sec": 0, 00:15:03.418 "rw_mbytes_per_sec": 0, 00:15:03.418 "r_mbytes_per_sec": 0, 00:15:03.418 "w_mbytes_per_sec": 0 00:15:03.418 }, 00:15:03.418 "claimed": true, 00:15:03.418 "claim_type": "exclusive_write", 00:15:03.418 "zoned": false, 00:15:03.418 "supported_io_types": { 00:15:03.418 "read": true, 00:15:03.418 "write": true, 00:15:03.418 "unmap": true, 00:15:03.418 "flush": true, 00:15:03.418 "reset": true, 00:15:03.418 "nvme_admin": false, 00:15:03.418 "nvme_io": false, 00:15:03.418 "nvme_io_md": false, 00:15:03.418 "write_zeroes": true, 00:15:03.418 "zcopy": true, 00:15:03.418 "get_zone_info": false, 00:15:03.418 "zone_management": false, 00:15:03.418 "zone_append": false, 00:15:03.418 "compare": false, 00:15:03.418 "compare_and_write": false, 00:15:03.418 "abort": true, 00:15:03.418 "seek_hole": false, 00:15:03.418 "seek_data": false, 00:15:03.418 "copy": true, 00:15:03.418 "nvme_iov_md": false 00:15:03.418 }, 00:15:03.418 "memory_domains": [ 00:15:03.418 { 00:15:03.418 "dma_device_id": "system", 00:15:03.418 "dma_device_type": 1 00:15:03.418 }, 00:15:03.418 { 00:15:03.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.418 "dma_device_type": 2 00:15:03.418 } 00:15:03.418 ], 00:15:03.418 "driver_specific": { 00:15:03.418 "passthru": { 00:15:03.418 "name": "pt1", 00:15:03.418 "base_bdev_name": "malloc1" 00:15:03.418 } 00:15:03.418 } 00:15:03.418 }' 00:15:03.418 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.418 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:03.677 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.935 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.935 "name": "pt2", 00:15:03.935 "aliases": [ 00:15:03.935 "00000000-0000-0000-0000-000000000002" 00:15:03.935 ], 00:15:03.935 "product_name": "passthru", 00:15:03.935 "block_size": 
512, 00:15:03.935 "num_blocks": 65536, 00:15:03.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.935 "assigned_rate_limits": { 00:15:03.935 "rw_ios_per_sec": 0, 00:15:03.935 "rw_mbytes_per_sec": 0, 00:15:03.935 "r_mbytes_per_sec": 0, 00:15:03.935 "w_mbytes_per_sec": 0 00:15:03.935 }, 00:15:03.935 "claimed": true, 00:15:03.935 "claim_type": "exclusive_write", 00:15:03.935 "zoned": false, 00:15:03.935 "supported_io_types": { 00:15:03.935 "read": true, 00:15:03.935 "write": true, 00:15:03.935 "unmap": true, 00:15:03.935 "flush": true, 00:15:03.935 "reset": true, 00:15:03.935 "nvme_admin": false, 00:15:03.935 "nvme_io": false, 00:15:03.935 "nvme_io_md": false, 00:15:03.935 "write_zeroes": true, 00:15:03.935 "zcopy": true, 00:15:03.935 "get_zone_info": false, 00:15:03.935 "zone_management": false, 00:15:03.935 "zone_append": false, 00:15:03.935 "compare": false, 00:15:03.935 "compare_and_write": false, 00:15:03.935 "abort": true, 00:15:03.935 "seek_hole": false, 00:15:03.935 "seek_data": false, 00:15:03.935 "copy": true, 00:15:03.935 "nvme_iov_md": false 00:15:03.935 }, 00:15:03.935 "memory_domains": [ 00:15:03.935 { 00:15:03.935 "dma_device_id": "system", 00:15:03.935 "dma_device_type": 1 00:15:03.935 }, 00:15:03.935 { 00:15:03.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.935 "dma_device_type": 2 00:15:03.936 } 00:15:03.936 ], 00:15:03.936 "driver_specific": { 00:15:03.936 "passthru": { 00:15:03.936 "name": "pt2", 00:15:03.936 "base_bdev_name": "malloc2" 00:15:03.936 } 00:15:03.936 } 00:15:03.936 }' 00:15:03.936 11:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.936 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.193 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.452 "name": "pt3", 00:15:04.452 "aliases": [ 00:15:04.452 "00000000-0000-0000-0000-000000000003" 00:15:04.452 ], 00:15:04.452 "product_name": "passthru", 00:15:04.452 "block_size": 512, 00:15:04.452 "num_blocks": 65536, 00:15:04.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.452 
"assigned_rate_limits": { 00:15:04.452 "rw_ios_per_sec": 0, 00:15:04.452 "rw_mbytes_per_sec": 0, 00:15:04.452 "r_mbytes_per_sec": 0, 00:15:04.452 "w_mbytes_per_sec": 0 00:15:04.452 }, 00:15:04.452 "claimed": true, 00:15:04.452 "claim_type": "exclusive_write", 00:15:04.452 "zoned": false, 00:15:04.452 "supported_io_types": { 00:15:04.452 "read": true, 00:15:04.452 "write": true, 00:15:04.452 "unmap": true, 00:15:04.452 "flush": true, 00:15:04.452 "reset": true, 00:15:04.452 "nvme_admin": false, 00:15:04.452 "nvme_io": false, 00:15:04.452 "nvme_io_md": false, 00:15:04.452 "write_zeroes": true, 00:15:04.452 "zcopy": true, 00:15:04.452 "get_zone_info": false, 00:15:04.452 "zone_management": false, 00:15:04.452 "zone_append": false, 00:15:04.452 "compare": false, 00:15:04.452 "compare_and_write": false, 00:15:04.452 "abort": true, 00:15:04.452 "seek_hole": false, 00:15:04.452 "seek_data": false, 00:15:04.452 "copy": true, 00:15:04.452 "nvme_iov_md": false 00:15:04.452 }, 00:15:04.452 "memory_domains": [ 00:15:04.452 { 00:15:04.452 "dma_device_id": "system", 00:15:04.452 "dma_device_type": 1 00:15:04.452 }, 00:15:04.452 { 00:15:04.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.452 "dma_device_type": 2 00:15:04.452 } 00:15:04.452 ], 00:15:04.452 "driver_specific": { 00:15:04.452 "passthru": { 00:15:04.452 "name": "pt3", 00:15:04.452 "base_bdev_name": "malloc3" 00:15:04.452 } 00:15:04.452 } 00:15:04.452 }' 00:15:04.452 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.710 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.967 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.967 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.967 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.967 11:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:05.225 [2024-07-25 11:56:51.102869] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.225 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4e6c70da-33a2-4065-b2b5-54cf4b7db1d8 00:15:05.225 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4e6c70da-33a2-4065-b2b5-54cf4b7db1d8 ']' 00:15:05.225 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:05.225 [2024-07-25 11:56:51.331197] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.225 [2024-07-25 11:56:51.331213] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.225 [2024-07-25 11:56:51.331258] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.225 [2024-07-25 11:56:51.331309] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.225 [2024-07-25 11:56:51.331320] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf06cb0 name raid_bdev1, state offline 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.483 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:05.742 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.742 11:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:06.001 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.001 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:06.260 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:06.260 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
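The xtrace lines around this point come from the NOT() helper in autotest_common.sh: it resolves the wrapped rpc.py command with type -t / type -P, runs it, and inverts the exit status so that an expected failure counts as a pass. A simplified bash sketch of that pattern (illustrative only; the real helper also special-cases signal exits, as the "es > 128" check later in this trace shows):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, capture its exit status
        (( es != 0 ))        # succeed only if the wrapped command failed
    }

    # Re-creating the array on bdevs that still carry a stale superblock is expected to fail
    # (the response below shows rpc.py returning code -17, "File exists"):
    NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1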
00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:06.519 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:06.779 [2024-07-25 11:56:52.711004] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.779 [2024-07-25 11:56:52.712268] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.779 [2024-07-25 11:56:52.712308] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:06.779 [2024-07-25 11:56:52.712351] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:06.779 [2024-07-25 11:56:52.712389] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:06.779 [2024-07-25 11:56:52.712410] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:06.779 [2024-07-25 11:56:52.712426] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.779 [2024-07-25 11:56:52.712435] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf0fd50 name raid_bdev1, state configuring 00:15:06.779 request: 00:15:06.779 { 00:15:06.779 "name": "raid_bdev1", 00:15:06.779 "raid_level": "concat", 00:15:06.779 "base_bdevs": [ 00:15:06.779 "malloc1", 00:15:06.779 "malloc2", 00:15:06.779 "malloc3" 00:15:06.779 ], 00:15:06.779 "strip_size_kb": 64, 00:15:06.779 "superblock": false, 00:15:06.779 "method": "bdev_raid_create", 00:15:06.779 "req_id": 1 00:15:06.779 } 00:15:06.779 Got JSON-RPC error response 00:15:06.779 response: 00:15:06.779 { 00:15:06.779 "code": -17, 00:15:06.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.779 } 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.779 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:07.038 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:07.038 11:56:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:07.038 11:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.297 [2024-07-25 11:56:53.176163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.297 [2024-07-25 11:56:53.176197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.297 [2024-07-25 11:56:53.176212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf03d00 00:15:07.297 [2024-07-25 11:56:53.176224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.297 [2024-07-25 11:56:53.177586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.297 [2024-07-25 11:56:53.177613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.297 [2024-07-25 11:56:53.177668] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.297 [2024-07-25 11:56:53.177692] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.297 pt1 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.297 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.556 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.556 "name": "raid_bdev1", 00:15:07.556 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:07.556 "strip_size_kb": 64, 00:15:07.556 "state": "configuring", 00:15:07.556 "raid_level": "concat", 00:15:07.556 "superblock": true, 00:15:07.556 "num_base_bdevs": 3, 00:15:07.556 "num_base_bdevs_discovered": 1, 00:15:07.556 "num_base_bdevs_operational": 3, 00:15:07.556 "base_bdevs_list": [ 00:15:07.556 { 00:15:07.556 "name": "pt1", 00:15:07.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.556 "is_configured": true, 00:15:07.556 "data_offset": 2048, 00:15:07.556 "data_size": 63488 00:15:07.556 }, 00:15:07.556 { 00:15:07.556 "name": null, 00:15:07.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.556 
"is_configured": false, 00:15:07.556 "data_offset": 2048, 00:15:07.556 "data_size": 63488 00:15:07.556 }, 00:15:07.556 { 00:15:07.556 "name": null, 00:15:07.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.556 "is_configured": false, 00:15:07.556 "data_offset": 2048, 00:15:07.556 "data_size": 63488 00:15:07.556 } 00:15:07.556 ] 00:15:07.556 }' 00:15:07.556 11:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.556 11:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.124 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:15:08.124 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.124 [2024-07-25 11:56:54.206881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.124 [2024-07-25 11:56:54.206927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.124 [2024-07-25 11:56:54.206946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf04370 00:15:08.124 [2024-07-25 11:56:54.206958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.124 [2024-07-25 11:56:54.207287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.124 [2024-07-25 11:56:54.207303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.124 [2024-07-25 11:56:54.207362] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:08.124 [2024-07-25 11:56:54.207379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.124 pt2 00:15:08.124 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:08.383 [2024-07-25 11:56:54.431489] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.383 11:56:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.642 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.642 "name": "raid_bdev1", 00:15:08.642 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:08.642 "strip_size_kb": 64, 00:15:08.642 "state": "configuring", 00:15:08.642 "raid_level": "concat", 00:15:08.642 "superblock": true, 00:15:08.642 "num_base_bdevs": 3, 00:15:08.642 "num_base_bdevs_discovered": 1, 00:15:08.643 "num_base_bdevs_operational": 3, 00:15:08.643 "base_bdevs_list": [ 00:15:08.643 { 00:15:08.643 "name": "pt1", 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.643 "is_configured": true, 00:15:08.643 "data_offset": 2048, 00:15:08.643 "data_size": 63488 00:15:08.643 }, 00:15:08.643 { 00:15:08.643 "name": null, 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.643 "is_configured": false, 00:15:08.643 "data_offset": 2048, 00:15:08.643 "data_size": 63488 00:15:08.643 }, 00:15:08.643 { 00:15:08.643 "name": null, 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.643 "is_configured": false, 00:15:08.643 "data_offset": 2048, 00:15:08.643 "data_size": 63488 00:15:08.643 } 00:15:08.643 ] 00:15:08.643 }' 00:15:08.643 11:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.643 11:56:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.210 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:09.210 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.210 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.472 [2024-07-25 11:56:55.478253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.472 [2024-07-25 11:56:55.478303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.472 [2024-07-25 11:56:55.478320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd64390 00:15:09.472 [2024-07-25 11:56:55.478332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.472 [2024-07-25 11:56:55.478664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.472 [2024-07-25 11:56:55.478681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.472 [2024-07-25 11:56:55.478739] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.472 [2024-07-25 11:56:55.478756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.472 pt2 00:15:09.472 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:09.472 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.472 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.767 [2024-07-25 11:56:55.706857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.767 [2024-07-25 11:56:55.706899] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:09.767 [2024-07-25 11:56:55.706915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd63e20 00:15:09.767 [2024-07-25 11:56:55.706932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.767 [2024-07-25 11:56:55.707241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.767 [2024-07-25 11:56:55.707257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.767 [2024-07-25 11:56:55.707311] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:09.767 [2024-07-25 11:56:55.707329] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.767 [2024-07-25 11:56:55.707427] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xf04de0 00:15:09.767 [2024-07-25 11:56:55.707436] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.767 [2024-07-25 11:56:55.707589] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf1e9c0 00:15:09.767 [2024-07-25 11:56:55.707700] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf04de0 00:15:09.767 [2024-07-25 11:56:55.707709] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf04de0 00:15:09.767 [2024-07-25 11:56:55.707794] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.767 pt3 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.767 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.026 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.026 "name": "raid_bdev1", 00:15:10.026 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:10.026 "strip_size_kb": 64, 00:15:10.026 "state": "online", 00:15:10.026 "raid_level": "concat", 00:15:10.026 "superblock": true, 00:15:10.026 "num_base_bdevs": 3, 00:15:10.026 
"num_base_bdevs_discovered": 3, 00:15:10.026 "num_base_bdevs_operational": 3, 00:15:10.026 "base_bdevs_list": [ 00:15:10.026 { 00:15:10.026 "name": "pt1", 00:15:10.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.026 "is_configured": true, 00:15:10.026 "data_offset": 2048, 00:15:10.026 "data_size": 63488 00:15:10.026 }, 00:15:10.026 { 00:15:10.026 "name": "pt2", 00:15:10.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.026 "is_configured": true, 00:15:10.026 "data_offset": 2048, 00:15:10.026 "data_size": 63488 00:15:10.026 }, 00:15:10.026 { 00:15:10.026 "name": "pt3", 00:15:10.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.026 "is_configured": true, 00:15:10.026 "data_offset": 2048, 00:15:10.026 "data_size": 63488 00:15:10.026 } 00:15:10.026 ] 00:15:10.026 }' 00:15:10.026 11:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.026 11:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.595 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.854 [2024-07-25 11:56:56.729830] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.854 "name": "raid_bdev1", 00:15:10.854 "aliases": [ 00:15:10.854 "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8" 00:15:10.854 ], 00:15:10.854 "product_name": "Raid Volume", 00:15:10.854 "block_size": 512, 00:15:10.854 "num_blocks": 190464, 00:15:10.854 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:10.854 "assigned_rate_limits": { 00:15:10.854 "rw_ios_per_sec": 0, 00:15:10.854 "rw_mbytes_per_sec": 0, 00:15:10.854 "r_mbytes_per_sec": 0, 00:15:10.854 "w_mbytes_per_sec": 0 00:15:10.854 }, 00:15:10.854 "claimed": false, 00:15:10.854 "zoned": false, 00:15:10.854 "supported_io_types": { 00:15:10.854 "read": true, 00:15:10.854 "write": true, 00:15:10.854 "unmap": true, 00:15:10.854 "flush": true, 00:15:10.854 "reset": true, 00:15:10.854 "nvme_admin": false, 00:15:10.854 "nvme_io": false, 00:15:10.854 "nvme_io_md": false, 00:15:10.854 "write_zeroes": true, 00:15:10.854 "zcopy": false, 00:15:10.854 "get_zone_info": false, 00:15:10.854 "zone_management": false, 00:15:10.854 "zone_append": false, 00:15:10.854 "compare": false, 00:15:10.854 "compare_and_write": false, 00:15:10.854 "abort": false, 00:15:10.854 "seek_hole": false, 00:15:10.854 "seek_data": false, 00:15:10.854 "copy": false, 00:15:10.854 "nvme_iov_md": false 00:15:10.854 }, 00:15:10.854 "memory_domains": [ 00:15:10.854 { 00:15:10.854 "dma_device_id": "system", 00:15:10.854 "dma_device_type": 1 00:15:10.854 }, 
00:15:10.854 { 00:15:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.854 "dma_device_type": 2 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "dma_device_id": "system", 00:15:10.854 "dma_device_type": 1 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.854 "dma_device_type": 2 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "dma_device_id": "system", 00:15:10.854 "dma_device_type": 1 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.854 "dma_device_type": 2 00:15:10.854 } 00:15:10.854 ], 00:15:10.854 "driver_specific": { 00:15:10.854 "raid": { 00:15:10.854 "uuid": "4e6c70da-33a2-4065-b2b5-54cf4b7db1d8", 00:15:10.854 "strip_size_kb": 64, 00:15:10.854 "state": "online", 00:15:10.854 "raid_level": "concat", 00:15:10.854 "superblock": true, 00:15:10.854 "num_base_bdevs": 3, 00:15:10.854 "num_base_bdevs_discovered": 3, 00:15:10.854 "num_base_bdevs_operational": 3, 00:15:10.854 "base_bdevs_list": [ 00:15:10.854 { 00:15:10.854 "name": "pt1", 00:15:10.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.854 "is_configured": true, 00:15:10.854 "data_offset": 2048, 00:15:10.854 "data_size": 63488 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "name": "pt2", 00:15:10.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.854 "is_configured": true, 00:15:10.854 "data_offset": 2048, 00:15:10.854 "data_size": 63488 00:15:10.854 }, 00:15:10.854 { 00:15:10.854 "name": "pt3", 00:15:10.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.854 "is_configured": true, 00:15:10.854 "data_offset": 2048, 00:15:10.854 "data_size": 63488 00:15:10.854 } 00:15:10.854 ] 00:15:10.854 } 00:15:10.854 } 00:15:10.854 }' 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:10.854 pt2 00:15:10.854 pt3' 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.854 11:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.114 "name": "pt1", 00:15:11.114 "aliases": [ 00:15:11.114 "00000000-0000-0000-0000-000000000001" 00:15:11.114 ], 00:15:11.114 "product_name": "passthru", 00:15:11.114 "block_size": 512, 00:15:11.114 "num_blocks": 65536, 00:15:11.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.114 "assigned_rate_limits": { 00:15:11.114 "rw_ios_per_sec": 0, 00:15:11.114 "rw_mbytes_per_sec": 0, 00:15:11.114 "r_mbytes_per_sec": 0, 00:15:11.114 "w_mbytes_per_sec": 0 00:15:11.114 }, 00:15:11.114 "claimed": true, 00:15:11.114 "claim_type": "exclusive_write", 00:15:11.114 "zoned": false, 00:15:11.114 "supported_io_types": { 00:15:11.114 "read": true, 00:15:11.114 "write": true, 00:15:11.114 "unmap": true, 00:15:11.114 "flush": true, 00:15:11.114 "reset": true, 00:15:11.114 "nvme_admin": false, 00:15:11.114 "nvme_io": false, 00:15:11.114 "nvme_io_md": false, 00:15:11.114 "write_zeroes": true, 00:15:11.114 "zcopy": true, 00:15:11.114 "get_zone_info": false, 00:15:11.114 "zone_management": false, 00:15:11.114 
"zone_append": false, 00:15:11.114 "compare": false, 00:15:11.114 "compare_and_write": false, 00:15:11.114 "abort": true, 00:15:11.114 "seek_hole": false, 00:15:11.114 "seek_data": false, 00:15:11.114 "copy": true, 00:15:11.114 "nvme_iov_md": false 00:15:11.114 }, 00:15:11.114 "memory_domains": [ 00:15:11.114 { 00:15:11.114 "dma_device_id": "system", 00:15:11.114 "dma_device_type": 1 00:15:11.114 }, 00:15:11.114 { 00:15:11.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.114 "dma_device_type": 2 00:15:11.114 } 00:15:11.114 ], 00:15:11.114 "driver_specific": { 00:15:11.114 "passthru": { 00:15:11.114 "name": "pt1", 00:15:11.114 "base_bdev_name": "malloc1" 00:15:11.114 } 00:15:11.114 } 00:15:11.114 }' 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.114 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.373 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.633 "name": "pt2", 00:15:11.633 "aliases": [ 00:15:11.633 "00000000-0000-0000-0000-000000000002" 00:15:11.633 ], 00:15:11.633 "product_name": "passthru", 00:15:11.633 "block_size": 512, 00:15:11.633 "num_blocks": 65536, 00:15:11.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.633 "assigned_rate_limits": { 00:15:11.633 "rw_ios_per_sec": 0, 00:15:11.633 "rw_mbytes_per_sec": 0, 00:15:11.633 "r_mbytes_per_sec": 0, 00:15:11.633 "w_mbytes_per_sec": 0 00:15:11.633 }, 00:15:11.633 "claimed": true, 00:15:11.633 "claim_type": "exclusive_write", 00:15:11.633 "zoned": false, 00:15:11.633 "supported_io_types": { 00:15:11.633 "read": true, 00:15:11.633 "write": true, 00:15:11.633 "unmap": true, 00:15:11.633 "flush": true, 00:15:11.633 "reset": true, 00:15:11.633 "nvme_admin": false, 00:15:11.633 "nvme_io": false, 00:15:11.633 "nvme_io_md": false, 00:15:11.633 "write_zeroes": true, 00:15:11.633 "zcopy": true, 00:15:11.633 "get_zone_info": false, 00:15:11.633 "zone_management": false, 00:15:11.633 "zone_append": false, 00:15:11.633 "compare": false, 00:15:11.633 "compare_and_write": false, 00:15:11.633 "abort": true, 00:15:11.633 
"seek_hole": false, 00:15:11.633 "seek_data": false, 00:15:11.633 "copy": true, 00:15:11.633 "nvme_iov_md": false 00:15:11.633 }, 00:15:11.633 "memory_domains": [ 00:15:11.633 { 00:15:11.633 "dma_device_id": "system", 00:15:11.633 "dma_device_type": 1 00:15:11.633 }, 00:15:11.633 { 00:15:11.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.633 "dma_device_type": 2 00:15:11.633 } 00:15:11.633 ], 00:15:11.633 "driver_specific": { 00:15:11.633 "passthru": { 00:15:11.633 "name": "pt2", 00:15:11.633 "base_bdev_name": "malloc2" 00:15:11.633 } 00:15:11.633 } 00:15:11.633 }' 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.633 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:11.892 11:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.152 "name": "pt3", 00:15:12.152 "aliases": [ 00:15:12.152 "00000000-0000-0000-0000-000000000003" 00:15:12.152 ], 00:15:12.152 "product_name": "passthru", 00:15:12.152 "block_size": 512, 00:15:12.152 "num_blocks": 65536, 00:15:12.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.152 "assigned_rate_limits": { 00:15:12.152 "rw_ios_per_sec": 0, 00:15:12.152 "rw_mbytes_per_sec": 0, 00:15:12.152 "r_mbytes_per_sec": 0, 00:15:12.152 "w_mbytes_per_sec": 0 00:15:12.152 }, 00:15:12.152 "claimed": true, 00:15:12.152 "claim_type": "exclusive_write", 00:15:12.152 "zoned": false, 00:15:12.152 "supported_io_types": { 00:15:12.152 "read": true, 00:15:12.152 "write": true, 00:15:12.152 "unmap": true, 00:15:12.152 "flush": true, 00:15:12.152 "reset": true, 00:15:12.152 "nvme_admin": false, 00:15:12.152 "nvme_io": false, 00:15:12.152 "nvme_io_md": false, 00:15:12.152 "write_zeroes": true, 00:15:12.152 "zcopy": true, 00:15:12.152 "get_zone_info": false, 00:15:12.152 "zone_management": false, 00:15:12.152 "zone_append": false, 00:15:12.152 "compare": false, 00:15:12.152 "compare_and_write": false, 00:15:12.152 "abort": true, 00:15:12.152 "seek_hole": false, 00:15:12.152 "seek_data": false, 00:15:12.152 "copy": true, 00:15:12.152 "nvme_iov_md": false 00:15:12.152 }, 
00:15:12.152 "memory_domains": [ 00:15:12.152 { 00:15:12.152 "dma_device_id": "system", 00:15:12.152 "dma_device_type": 1 00:15:12.152 }, 00:15:12.152 { 00:15:12.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.152 "dma_device_type": 2 00:15:12.152 } 00:15:12.152 ], 00:15:12.152 "driver_specific": { 00:15:12.152 "passthru": { 00:15:12.152 "name": "pt3", 00:15:12.152 "base_bdev_name": "malloc3" 00:15:12.152 } 00:15:12.152 } 00:15:12.152 }' 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.152 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.411 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:12.670 [2024-07-25 11:56:58.598746] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4e6c70da-33a2-4065-b2b5-54cf4b7db1d8 '!=' 4e6c70da-33a2-4065-b2b5-54cf4b7db1d8 ']' 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4143623 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4143623 ']' 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4143623 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4143623 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4143623' 00:15:12.670 
killing process with pid 4143623 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4143623 00:15:12.670 [2024-07-25 11:56:58.666299] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.670 [2024-07-25 11:56:58.666356] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.670 [2024-07-25 11:56:58.666405] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.670 [2024-07-25 11:56:58.666415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf04de0 name raid_bdev1, state offline 00:15:12.670 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4143623 00:15:12.670 [2024-07-25 11:56:58.690816] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.930 11:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:12.930 00:15:12.930 real 0m13.281s 00:15:12.930 user 0m23.868s 00:15:12.930 sys 0m2.458s 00:15:12.930 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.930 11:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.930 ************************************ 00:15:12.930 END TEST raid_superblock_test 00:15:12.930 ************************************ 00:15:12.930 11:56:58 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:15:12.930 11:56:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:12.930 11:56:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.930 11:56:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.930 ************************************ 00:15:12.930 START TEST raid_read_error_test 00:15:12.930 ************************************ 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2rgwUV9vZy 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4146039 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4146039 /var/tmp/spdk-raid.sock 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4146039 ']' 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.930 11:56:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.930 [2024-07-25 11:56:59.041353] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
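raid_read_error_test drives the concat array through the bdevperf instance launched above on /var/tmp/spdk-raid.sock and, in the lines that follow, builds each base bdev as a malloc -> error -> passthru stack so that read errors can be injected underneath the RAID volume. A condensed bash sketch of that per-bdev setup, assuming the same rpc.py socket used throughout this trace (the loop is an illustration; the script itself iterates over ${base_bdevs[@]} and checks each RPC result):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc    # 32 MiB backing bdev, 512-byte blocks (65536 blocks)
        $rpc bdev_error_create BaseBdev${i}_malloc               # error-injection wrapper, registered as EE_BaseBdev${i}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s   # -s: write a superblock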
00:15:12.930 [2024-07-25 11:56:59.041409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146039 ] 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:13.190 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:13.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.190 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:13.190 [2024-07-25 11:56:59.172488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.190 [2024-07-25 11:56:59.259706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.449 [2024-07-25 11:56:59.324957] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.449 [2024-07-25 11:56:59.324990] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.017 11:56:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.017 11:56:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:14.017 11:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.017 11:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.276 BaseBdev1_malloc 00:15:14.276 11:57:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:14.845 true 00:15:14.845 11:57:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.845 [2024-07-25 11:57:00.900225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.845 [2024-07-25 11:57:00.900267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.845 [2024-07-25 11:57:00.900285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8b190 00:15:14.845 [2024-07-25 11:57:00.900297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.845 [2024-07-25 11:57:00.901876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.845 [2024-07-25 11:57:00.901903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.845 BaseBdev1 00:15:14.845 11:57:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.845 11:57:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.104 BaseBdev2_malloc 00:15:15.104 11:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.363 true 00:15:15.363 11:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:15.623 [2024-07-25 11:57:01.574467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:15.623 [2024-07-25 11:57:01.574505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.623 [2024-07-25 11:57:01.574522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8fe20 00:15:15.623 [2024-07-25 11:57:01.574533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.623 [2024-07-25 11:57:01.575860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.623 [2024-07-25 11:57:01.575886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.623 BaseBdev2 00:15:15.623 11:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:15.623 11:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:15.882 BaseBdev3_malloc 00:15:15.882 11:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:16.142 true 00:15:16.142 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:16.142 [2024-07-25 11:57:02.256515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:16.142 [2024-07-25 11:57:02.256552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.142 [2024-07-25 11:57:02.256572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f90d90 00:15:16.142 [2024-07-25 11:57:02.256583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.142 [2024-07-25 11:57:02.257943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.142 [2024-07-25 11:57:02.257968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.401 BaseBdev3 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:15:16.401 [2024-07-25 11:57:02.485155] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.401 [2024-07-25 11:57:02.486312] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.401 [2024-07-25 11:57:02.486377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.401 [2024-07-25 11:57:02.486565] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f92ba0 00:15:16.401 [2024-07-25 11:57:02.486576] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:16.401 [2024-07-25 11:57:02.486749] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1de6af0 00:15:16.401 [2024-07-25 11:57:02.486888] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f92ba0 00:15:16.401 [2024-07-25 11:57:02.486897] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f92ba0 00:15:16.401 [2024-07-25 11:57:02.486990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.401 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.660 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.660 "name": "raid_bdev1", 00:15:16.660 "uuid": "b4616110-f9a4-453d-8380-4ec6362e365e", 00:15:16.660 "strip_size_kb": 64, 00:15:16.660 "state": "online", 00:15:16.660 "raid_level": "concat", 00:15:16.660 "superblock": true, 00:15:16.660 "num_base_bdevs": 3, 00:15:16.660 "num_base_bdevs_discovered": 3, 00:15:16.660 "num_base_bdevs_operational": 3, 00:15:16.660 "base_bdevs_list": [ 00:15:16.660 { 00:15:16.660 "name": "BaseBdev1", 00:15:16.660 "uuid": "31de3e92-8782-5bb0-b0be-2a09571d8e5b", 00:15:16.660 "is_configured": true, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 }, 00:15:16.660 { 00:15:16.660 "name": "BaseBdev2", 00:15:16.660 "uuid": "46ec3d2b-fb63-5413-af1e-9a2814768732", 00:15:16.660 "is_configured": true, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 }, 00:15:16.660 { 00:15:16.660 "name": "BaseBdev3", 00:15:16.660 "uuid": "490dce5e-9e46-505f-801d-bcc3a5c8345e", 00:15:16.660 "is_configured": true, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 
00:15:16.660 } 00:15:16.661 ] 00:15:16.661 }' 00:15:16.661 11:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.661 11:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.229 11:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:17.229 11:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:17.488 [2024-07-25 11:57:03.399794] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ae56c0 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.425 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.685 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.685 "name": "raid_bdev1", 00:15:18.685 "uuid": "b4616110-f9a4-453d-8380-4ec6362e365e", 00:15:18.685 "strip_size_kb": 64, 00:15:18.685 "state": "online", 00:15:18.685 "raid_level": "concat", 00:15:18.685 "superblock": true, 00:15:18.685 "num_base_bdevs": 3, 00:15:18.685 "num_base_bdevs_discovered": 3, 00:15:18.685 "num_base_bdevs_operational": 3, 00:15:18.685 "base_bdevs_list": [ 00:15:18.685 { 00:15:18.685 "name": "BaseBdev1", 00:15:18.685 "uuid": "31de3e92-8782-5bb0-b0be-2a09571d8e5b", 00:15:18.685 "is_configured": true, 00:15:18.685 "data_offset": 2048, 00:15:18.685 "data_size": 63488 00:15:18.685 }, 00:15:18.685 { 00:15:18.685 "name": "BaseBdev2", 00:15:18.685 "uuid": "46ec3d2b-fb63-5413-af1e-9a2814768732", 00:15:18.685 "is_configured": true, 00:15:18.685 "data_offset": 2048, 
00:15:18.685 "data_size": 63488 00:15:18.685 }, 00:15:18.685 { 00:15:18.685 "name": "BaseBdev3", 00:15:18.685 "uuid": "490dce5e-9e46-505f-801d-bcc3a5c8345e", 00:15:18.685 "is_configured": true, 00:15:18.685 "data_offset": 2048, 00:15:18.685 "data_size": 63488 00:15:18.685 } 00:15:18.685 ] 00:15:18.685 }' 00:15:18.685 11:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.685 11:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.253 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:19.512 [2024-07-25 11:57:05.482348] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.512 [2024-07-25 11:57:05.482378] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.512 [2024-07-25 11:57:05.485284] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.512 [2024-07-25 11:57:05.485317] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.512 [2024-07-25 11:57:05.485346] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.512 [2024-07-25 11:57:05.485357] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f92ba0 name raid_bdev1, state offline 00:15:19.512 0 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4146039 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4146039 ']' 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4146039 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4146039 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4146039' 00:15:19.512 killing process with pid 4146039 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4146039 00:15:19.512 [2024-07-25 11:57:05.560081] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.512 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4146039 00:15:19.512 [2024-07-25 11:57:05.578945] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2rgwUV9vZy 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:15:19.771 00:15:19.771 real 0m6.819s 00:15:19.771 user 0m10.769s 00:15:19.771 sys 0m1.185s 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.771 11:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.771 ************************************ 00:15:19.771 END TEST raid_read_error_test 00:15:19.771 ************************************ 00:15:19.771 11:57:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:15:19.771 11:57:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:19.771 11:57:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.771 11:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.771 ************************************ 00:15:19.771 START TEST raid_write_error_test 00:15:19.771 ************************************ 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:19.771 11:57:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:19.771 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WCyCyr9wiy 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4147865 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4147865 /var/tmp/spdk-raid.sock 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4147865 ']' 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.030 11:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.030 [2024-07-25 11:57:05.994207] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
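For reference, the raid_io_error_test flow that both the read run above and the write run now starting follow can be condensed into the RPC sequence below. This is a sketch assembled from the trace itself rather than the verbatim bdev_raid.sh code: paths are shortened (rpc.py and bdevperf.py live under the SPDK scripts/ and examples/bdev/bdevperf/ directories seen in the trace), and only the first base bdev is spelled out.

    # bdevperf was launched with -z, so it idles and waits for configuration over the
    # RPC socket (flags from the trace: -r /var/tmp/spdk-raid.sock -T raid_bdev1
    # -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid)

    # one three-layer stack per base bdev: malloc -> error -> passthru
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc        # yields EE_BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...same three calls for BaseBdev2 and BaseBdev3...

    # assemble the concat array with a superblock (-s) and 64k strip size (-z 64)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

    # arm the error bdev under BaseBdev1, then kick off the queued bdevperf job
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

    # teardown
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1

The earlier read-error run is identical except that the injected error type is 'read failure' and a different bdevperf log file (/raidtest/tmp.2rgwUV9vZy) is used.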
00:15:20.030 [2024-07-25 11:57:05.994349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147865 ] 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:20.030 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.030 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:20.031 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:20.031 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:20.031 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:20.290 [2024-07-25 11:57:06.199948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.290 [2024-07-25 11:57:06.282970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.290 [2024-07-25 11:57:06.340930] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.290 [2024-07-25 11:57:06.340965] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.856 11:57:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.856 11:57:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:20.856 11:57:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:20.856 11:57:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:21.114 BaseBdev1_malloc 00:15:21.114 11:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:21.373 true 00:15:21.373 11:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:21.631 [2024-07-25 11:57:07.493958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:21.631 [2024-07-25 11:57:07.493999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.631 [2024-07-25 11:57:07.494017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c66190 00:15:21.631 [2024-07-25 11:57:07.494029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.631 [2024-07-25 11:57:07.495644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.631 [2024-07-25 11:57:07.495671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:21.631 BaseBdev1 00:15:21.631 11:57:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:21.631 11:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:21.631 BaseBdev2_malloc 00:15:21.631 11:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:21.889 true 00:15:21.889 11:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:22.148 [2024-07-25 11:57:08.179970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:22.148 [2024-07-25 11:57:08.180008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.148 [2024-07-25 11:57:08.180026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6ae20 00:15:22.148 [2024-07-25 11:57:08.180037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.148 [2024-07-25 11:57:08.181417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.148 [2024-07-25 11:57:08.181442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.148 BaseBdev2 00:15:22.148 11:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:22.148 11:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:22.407 BaseBdev3_malloc 00:15:22.407 11:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:22.665 true 00:15:22.665 11:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:22.923 [2024-07-25 11:57:08.861962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:22.923 [2024-07-25 11:57:08.862002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.923 [2024-07-25 11:57:08.862023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6bd90 00:15:22.923 [2024-07-25 11:57:08.862034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.923 [2024-07-25 11:57:08.863436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.923 [2024-07-25 11:57:08.863462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.923 BaseBdev3 00:15:22.923 11:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:15:23.182 [2024-07-25 11:57:09.090600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.182 [2024-07-25 11:57:09.091768] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.182 [2024-07-25 11:57:09.091832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.182 [2024-07-25 11:57:09.092019] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c6dba0 00:15:23.182 [2024-07-25 11:57:09.092030] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:23.182 [2024-07-25 11:57:09.092212] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac1af0 00:15:23.182 [2024-07-25 11:57:09.092349] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c6dba0 00:15:23.182 [2024-07-25 11:57:09.092358] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c6dba0 00:15:23.182 [2024-07-25 11:57:09.092455] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.182 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.440 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.440 "name": "raid_bdev1", 00:15:23.440 "uuid": "d44e1a4a-14fc-4af3-8396-e23df1b3f1fa", 00:15:23.440 "strip_size_kb": 64, 00:15:23.440 "state": "online", 00:15:23.440 "raid_level": "concat", 00:15:23.440 "superblock": true, 00:15:23.440 "num_base_bdevs": 3, 00:15:23.440 "num_base_bdevs_discovered": 3, 00:15:23.440 "num_base_bdevs_operational": 3, 00:15:23.440 "base_bdevs_list": [ 00:15:23.440 { 00:15:23.440 "name": "BaseBdev1", 00:15:23.440 "uuid": "794b9348-81a4-57da-aa84-579bc5ac19c8", 00:15:23.440 "is_configured": true, 00:15:23.440 "data_offset": 2048, 00:15:23.440 "data_size": 63488 00:15:23.440 }, 00:15:23.440 { 00:15:23.440 "name": "BaseBdev2", 00:15:23.440 "uuid": "6412e5e9-aa88-5ef8-9411-a4ba8efcd082", 00:15:23.440 "is_configured": true, 00:15:23.440 "data_offset": 2048, 00:15:23.440 "data_size": 63488 00:15:23.440 }, 00:15:23.440 { 00:15:23.440 "name": "BaseBdev3", 00:15:23.440 "uuid": "763c5c1a-8b62-5632-b080-17ff20f12ed3", 00:15:23.440 "is_configured": true, 00:15:23.440 "data_offset": 2048, 00:15:23.440 
"data_size": 63488 00:15:23.440 } 00:15:23.440 ] 00:15:23.440 }' 00:15:23.440 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.440 11:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.020 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:24.020 11:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:24.020 [2024-07-25 11:57:09.973225] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17c06c0 00:15:24.953 11:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.211 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.472 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.472 "name": "raid_bdev1", 00:15:25.472 "uuid": "d44e1a4a-14fc-4af3-8396-e23df1b3f1fa", 00:15:25.472 "strip_size_kb": 64, 00:15:25.472 "state": "online", 00:15:25.472 "raid_level": "concat", 00:15:25.472 "superblock": true, 00:15:25.472 "num_base_bdevs": 3, 00:15:25.472 "num_base_bdevs_discovered": 3, 00:15:25.472 "num_base_bdevs_operational": 3, 00:15:25.472 "base_bdevs_list": [ 00:15:25.472 { 00:15:25.472 "name": "BaseBdev1", 00:15:25.472 "uuid": "794b9348-81a4-57da-aa84-579bc5ac19c8", 00:15:25.472 "is_configured": true, 00:15:25.472 "data_offset": 2048, 00:15:25.472 "data_size": 63488 00:15:25.472 }, 00:15:25.472 { 00:15:25.472 "name": "BaseBdev2", 00:15:25.472 "uuid": "6412e5e9-aa88-5ef8-9411-a4ba8efcd082", 00:15:25.472 "is_configured": 
true, 00:15:25.472 "data_offset": 2048, 00:15:25.472 "data_size": 63488 00:15:25.472 }, 00:15:25.472 { 00:15:25.472 "name": "BaseBdev3", 00:15:25.472 "uuid": "763c5c1a-8b62-5632-b080-17ff20f12ed3", 00:15:25.472 "is_configured": true, 00:15:25.472 "data_offset": 2048, 00:15:25.472 "data_size": 63488 00:15:25.472 } 00:15:25.472 ] 00:15:25.472 }' 00:15:25.472 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.472 11:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.038 11:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:26.038 [2024-07-25 11:57:12.143913] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.038 [2024-07-25 11:57:12.143945] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.038 [2024-07-25 11:57:12.146857] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.038 [2024-07-25 11:57:12.146890] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.038 [2024-07-25 11:57:12.146919] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.038 [2024-07-25 11:57:12.146930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c6dba0 name raid_bdev1, state offline 00:15:26.038 0 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4147865 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4147865 ']' 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4147865 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4147865 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4147865' 00:15:26.296 killing process with pid 4147865 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4147865 00:15:26.296 [2024-07-25 11:57:12.223280] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.296 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4147865 00:15:26.296 [2024-07-25 11:57:12.241508] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.553 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WCyCyr9wiy 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy 
concat 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:15:26.554 00:15:26.554 real 0m6.575s 00:15:26.554 user 0m10.349s 00:15:26.554 sys 0m1.163s 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.554 11:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 ************************************ 00:15:26.554 END TEST raid_write_error_test 00:15:26.554 ************************************ 00:15:26.554 11:57:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:26.554 11:57:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:26.554 11:57:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:26.554 11:57:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.554 11:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 ************************************ 00:15:26.554 START TEST raid_state_function_test 00:15:26.554 ************************************ 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 
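Both error tests decide pass or fail by post-processing the bdevperf log rather than by querying RPC state. The lines just above show that check for the write run; restated as a minimal sketch (log file name and field position taken directly from the trace, where column 6 of the raid_bdev1 summary line is what the script stores as fail_per_s):

    fail_per_s=$(grep -v Job /raidtest/tmp.WCyCyr9wiy | grep raid_bdev1 | awk '{print $6}')
    # concat carries no redundancy (has_redundancy returned 1), so the injected
    # write failure must surface as a non-zero failure rate on raid_bdev1
    [[ $fail_per_s != "0.00" ]]

Here the parsed value was 0.46 (0.48 in the read run), so both assertions held. For a redundant level such as raid1 the expectation would presumably flip to exactly 0.00 failures, but that branch is not exercised in this log.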
00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4149146 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4149146' 00:15:26.554 Process raid pid: 4149146 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4149146 /var/tmp/spdk-raid.sock 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4149146 ']' 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.554 11:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 [2024-07-25 11:57:12.588647] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
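Unlike the two error tests, raid_state_function_test runs no I/O at all: it starts the lightweight bdev_svc application (launched above with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid) and drives raid state transitions purely through RPC. A minimal sketch of that harness follows; the polling loop is an assumption, since the waitforlisten helper's internals are not shown in this trace, and paths are shortened.

    $SPDK_DIR/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # hypothetical stand-in for: waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    until rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # ...bdev_raid_create / bdev_malloc_create / bdev_raid_get_bdevs calls as traced below...

    kill "$raid_pid"    # simplified teardown; the script itself uses its killprocess helper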
00:15:26.554 [2024-07-25 11:57:12.588703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:26.554 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:26.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.554 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:26.813 [2024-07-25 11:57:12.721591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.813 [2024-07-25 11:57:12.809795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.813 [2024-07-25 11:57:12.869288] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.813 [2024-07-25 11:57:12.869318] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.071 11:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.071 11:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:27.071 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:27.328 [2024-07-25 11:57:13.274285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.328 [2024-07-25 11:57:13.274317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.328 [2024-07-25 11:57:13.274327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.328 [2024-07-25 11:57:13.274338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.328 [2024-07-25 11:57:13.274346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.328 [2024-07-25 11:57:13.274356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
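The first assertion of the state-function test is visible just above: bdev_raid_create for the raid1 array Existed_Raid is accepted even though none of its three base bdevs exist yet, and the array must then report state "configuring" with zero discovered base bdevs. The full body of the verify_raid_bdev_state helper is not reproduced in this trace; a minimal equivalent of the check it performs, reusing the bdev_raid_get_bdevs plus jq pattern that is traced, might look like:

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state'      <<<"$tmp") == configuring ]]
    [[ $(jq -r '.raid_level' <<<"$tmp") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$tmp") -eq 3 ]]
    [[ $(jq -r '.num_base_bdevs_discovered'  <<<"$tmp") -eq 0 ]]

As the BaseBdevN malloc bdevs are created further down, the re-created array claims them and num_base_bdevs_discovered rises (the next JSON dump in the trace still shows "state": "configuring" but with "num_base_bdevs_discovered": 1 once BaseBdev1 appears).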
00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.328 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.585 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.585 "name": "Existed_Raid", 00:15:27.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.585 "strip_size_kb": 0, 00:15:27.585 "state": "configuring", 00:15:27.585 "raid_level": "raid1", 00:15:27.585 "superblock": false, 00:15:27.585 "num_base_bdevs": 3, 00:15:27.585 "num_base_bdevs_discovered": 0, 00:15:27.585 "num_base_bdevs_operational": 3, 00:15:27.585 "base_bdevs_list": [ 00:15:27.585 { 00:15:27.585 "name": "BaseBdev1", 00:15:27.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.585 "is_configured": false, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 0 00:15:27.585 }, 00:15:27.585 { 00:15:27.585 "name": "BaseBdev2", 00:15:27.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.585 "is_configured": false, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 0 00:15:27.585 }, 00:15:27.585 { 00:15:27.585 "name": "BaseBdev3", 00:15:27.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.585 "is_configured": false, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 0 00:15:27.585 } 00:15:27.585 ] 00:15:27.585 }' 00:15:27.585 11:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.585 11:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.148 11:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.406 [2024-07-25 11:57:14.296866] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.406 [2024-07-25 11:57:14.296891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1652f40 name Existed_Raid, state configuring 00:15:28.406 11:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.406 [2024-07-25 11:57:14.521465] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.406 [2024-07-25 11:57:14.521492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.406 [2024-07-25 11:57:14.521502] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.406 [2024-07-25 11:57:14.521512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:15:28.406 [2024-07-25 11:57:14.521520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.406 [2024-07-25 11:57:14.521530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.665 [2024-07-25 11:57:14.743521] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.665 BaseBdev1 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.665 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.923 11:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.182 [ 00:15:29.182 { 00:15:29.182 "name": "BaseBdev1", 00:15:29.182 "aliases": [ 00:15:29.182 "3dded469-fe7e-41a6-bc48-8c75764a277d" 00:15:29.182 ], 00:15:29.182 "product_name": "Malloc disk", 00:15:29.182 "block_size": 512, 00:15:29.182 "num_blocks": 65536, 00:15:29.182 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:29.182 "assigned_rate_limits": { 00:15:29.182 "rw_ios_per_sec": 0, 00:15:29.182 "rw_mbytes_per_sec": 0, 00:15:29.182 "r_mbytes_per_sec": 0, 00:15:29.182 "w_mbytes_per_sec": 0 00:15:29.182 }, 00:15:29.182 "claimed": true, 00:15:29.182 "claim_type": "exclusive_write", 00:15:29.182 "zoned": false, 00:15:29.182 "supported_io_types": { 00:15:29.182 "read": true, 00:15:29.182 "write": true, 00:15:29.182 "unmap": true, 00:15:29.182 "flush": true, 00:15:29.182 "reset": true, 00:15:29.182 "nvme_admin": false, 00:15:29.182 "nvme_io": false, 00:15:29.182 "nvme_io_md": false, 00:15:29.182 "write_zeroes": true, 00:15:29.182 "zcopy": true, 00:15:29.182 "get_zone_info": false, 00:15:29.182 "zone_management": false, 00:15:29.182 "zone_append": false, 00:15:29.182 "compare": false, 00:15:29.182 "compare_and_write": false, 00:15:29.182 "abort": true, 00:15:29.182 "seek_hole": false, 00:15:29.182 "seek_data": false, 00:15:29.182 "copy": true, 00:15:29.182 "nvme_iov_md": false 00:15:29.182 }, 00:15:29.182 "memory_domains": [ 00:15:29.182 { 00:15:29.182 "dma_device_id": "system", 00:15:29.182 "dma_device_type": 1 00:15:29.182 }, 00:15:29.182 { 00:15:29.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.182 "dma_device_type": 2 00:15:29.182 } 00:15:29.182 ], 00:15:29.182 "driver_specific": {} 00:15:29.182 } 00:15:29.182 ] 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.182 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.441 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.441 "name": "Existed_Raid", 00:15:29.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.441 "strip_size_kb": 0, 00:15:29.441 "state": "configuring", 00:15:29.441 "raid_level": "raid1", 00:15:29.441 "superblock": false, 00:15:29.441 "num_base_bdevs": 3, 00:15:29.441 "num_base_bdevs_discovered": 1, 00:15:29.441 "num_base_bdevs_operational": 3, 00:15:29.441 "base_bdevs_list": [ 00:15:29.441 { 00:15:29.441 "name": "BaseBdev1", 00:15:29.441 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:29.441 "is_configured": true, 00:15:29.441 "data_offset": 0, 00:15:29.441 "data_size": 65536 00:15:29.441 }, 00:15:29.441 { 00:15:29.441 "name": "BaseBdev2", 00:15:29.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.441 "is_configured": false, 00:15:29.441 "data_offset": 0, 00:15:29.441 "data_size": 0 00:15:29.441 }, 00:15:29.441 { 00:15:29.441 "name": "BaseBdev3", 00:15:29.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.441 "is_configured": false, 00:15:29.441 "data_offset": 0, 00:15:29.441 "data_size": 0 00:15:29.441 } 00:15:29.441 ] 00:15:29.441 }' 00:15:29.441 11:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.441 11:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.007 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.265 [2024-07-25 11:57:16.215386] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.265 [2024-07-25 11:57:16.215420] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1652810 name Existed_Raid, state configuring 00:15:30.265 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:15:30.524 [2024-07-25 11:57:16.440005] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.524 [2024-07-25 11:57:16.441472] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.524 [2024-07-25 11:57:16.441503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.524 [2024-07-25 11:57:16.441512] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.524 [2024-07-25 11:57:16.441523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.524 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.783 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.783 "name": "Existed_Raid", 00:15:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.783 "strip_size_kb": 0, 00:15:30.783 "state": "configuring", 00:15:30.783 "raid_level": "raid1", 00:15:30.783 "superblock": false, 00:15:30.783 "num_base_bdevs": 3, 00:15:30.783 "num_base_bdevs_discovered": 1, 00:15:30.783 "num_base_bdevs_operational": 3, 00:15:30.783 "base_bdevs_list": [ 00:15:30.783 { 00:15:30.783 "name": "BaseBdev1", 00:15:30.783 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:30.783 "is_configured": true, 00:15:30.783 "data_offset": 0, 00:15:30.783 "data_size": 65536 00:15:30.783 }, 00:15:30.783 { 00:15:30.783 "name": "BaseBdev2", 00:15:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.783 "is_configured": false, 00:15:30.783 "data_offset": 0, 00:15:30.783 "data_size": 0 00:15:30.783 }, 00:15:30.783 { 00:15:30.783 "name": "BaseBdev3", 00:15:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.783 "is_configured": false, 00:15:30.783 "data_offset": 0, 00:15:30.783 "data_size": 0 00:15:30.783 } 00:15:30.783 ] 
00:15:30.783 }' 00:15:30.783 11:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.783 11:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.347 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.604 [2024-07-25 11:57:17.493955] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.604 BaseBdev2 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.604 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.861 [ 00:15:31.861 { 00:15:31.861 "name": "BaseBdev2", 00:15:31.861 "aliases": [ 00:15:31.861 "a71c67b5-89f8-46f6-8834-c3ad1c7a148c" 00:15:31.861 ], 00:15:31.861 "product_name": "Malloc disk", 00:15:31.861 "block_size": 512, 00:15:31.861 "num_blocks": 65536, 00:15:31.861 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:31.861 "assigned_rate_limits": { 00:15:31.861 "rw_ios_per_sec": 0, 00:15:31.861 "rw_mbytes_per_sec": 0, 00:15:31.861 "r_mbytes_per_sec": 0, 00:15:31.861 "w_mbytes_per_sec": 0 00:15:31.861 }, 00:15:31.861 "claimed": true, 00:15:31.861 "claim_type": "exclusive_write", 00:15:31.861 "zoned": false, 00:15:31.861 "supported_io_types": { 00:15:31.861 "read": true, 00:15:31.861 "write": true, 00:15:31.861 "unmap": true, 00:15:31.861 "flush": true, 00:15:31.861 "reset": true, 00:15:31.861 "nvme_admin": false, 00:15:31.861 "nvme_io": false, 00:15:31.861 "nvme_io_md": false, 00:15:31.861 "write_zeroes": true, 00:15:31.861 "zcopy": true, 00:15:31.861 "get_zone_info": false, 00:15:31.861 "zone_management": false, 00:15:31.861 "zone_append": false, 00:15:31.861 "compare": false, 00:15:31.861 "compare_and_write": false, 00:15:31.861 "abort": true, 00:15:31.861 "seek_hole": false, 00:15:31.861 "seek_data": false, 00:15:31.861 "copy": true, 00:15:31.861 "nvme_iov_md": false 00:15:31.861 }, 00:15:31.861 "memory_domains": [ 00:15:31.861 { 00:15:31.861 "dma_device_id": "system", 00:15:31.861 "dma_device_type": 1 00:15:31.861 }, 00:15:31.861 { 00:15:31.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.861 "dma_device_type": 2 00:15:31.861 } 00:15:31.861 ], 00:15:31.861 "driver_specific": {} 00:15:31.861 } 00:15:31.861 ] 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:31.861 
11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.861 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.862 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.862 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.862 11:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.118 11:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.118 "name": "Existed_Raid", 00:15:32.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.118 "strip_size_kb": 0, 00:15:32.118 "state": "configuring", 00:15:32.118 "raid_level": "raid1", 00:15:32.118 "superblock": false, 00:15:32.118 "num_base_bdevs": 3, 00:15:32.118 "num_base_bdevs_discovered": 2, 00:15:32.118 "num_base_bdevs_operational": 3, 00:15:32.118 "base_bdevs_list": [ 00:15:32.118 { 00:15:32.118 "name": "BaseBdev1", 00:15:32.118 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:32.118 "is_configured": true, 00:15:32.118 "data_offset": 0, 00:15:32.118 "data_size": 65536 00:15:32.118 }, 00:15:32.118 { 00:15:32.118 "name": "BaseBdev2", 00:15:32.118 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:32.118 "is_configured": true, 00:15:32.118 "data_offset": 0, 00:15:32.118 "data_size": 65536 00:15:32.118 }, 00:15:32.118 { 00:15:32.118 "name": "BaseBdev3", 00:15:32.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.118 "is_configured": false, 00:15:32.118 "data_offset": 0, 00:15:32.118 "data_size": 0 00:15:32.118 } 00:15:32.118 ] 00:15:32.118 }' 00:15:32.118 11:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.118 11:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.685 11:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:32.942 [2024-07-25 11:57:19.009203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.942 [2024-07-25 11:57:19.009237] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1653700 00:15:32.942 [2024-07-25 11:57:19.009245] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 65536, blocklen 512 00:15:32.942 [2024-07-25 11:57:19.009419] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16533d0 00:15:32.942 [2024-07-25 11:57:19.009533] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1653700 00:15:32.942 [2024-07-25 11:57:19.009542] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1653700 00:15:32.942 [2024-07-25 11:57:19.009690] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.942 BaseBdev3 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.942 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.200 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.460 [ 00:15:33.460 { 00:15:33.460 "name": "BaseBdev3", 00:15:33.460 "aliases": [ 00:15:33.460 "2aead6ee-79c7-4755-b9ce-abd760f6d41b" 00:15:33.460 ], 00:15:33.460 "product_name": "Malloc disk", 00:15:33.460 "block_size": 512, 00:15:33.460 "num_blocks": 65536, 00:15:33.460 "uuid": "2aead6ee-79c7-4755-b9ce-abd760f6d41b", 00:15:33.460 "assigned_rate_limits": { 00:15:33.460 "rw_ios_per_sec": 0, 00:15:33.460 "rw_mbytes_per_sec": 0, 00:15:33.460 "r_mbytes_per_sec": 0, 00:15:33.460 "w_mbytes_per_sec": 0 00:15:33.460 }, 00:15:33.460 "claimed": true, 00:15:33.460 "claim_type": "exclusive_write", 00:15:33.460 "zoned": false, 00:15:33.460 "supported_io_types": { 00:15:33.460 "read": true, 00:15:33.460 "write": true, 00:15:33.460 "unmap": true, 00:15:33.460 "flush": true, 00:15:33.460 "reset": true, 00:15:33.460 "nvme_admin": false, 00:15:33.460 "nvme_io": false, 00:15:33.460 "nvme_io_md": false, 00:15:33.460 "write_zeroes": true, 00:15:33.460 "zcopy": true, 00:15:33.460 "get_zone_info": false, 00:15:33.460 "zone_management": false, 00:15:33.460 "zone_append": false, 00:15:33.460 "compare": false, 00:15:33.460 "compare_and_write": false, 00:15:33.460 "abort": true, 00:15:33.460 "seek_hole": false, 00:15:33.460 "seek_data": false, 00:15:33.460 "copy": true, 00:15:33.460 "nvme_iov_md": false 00:15:33.460 }, 00:15:33.460 "memory_domains": [ 00:15:33.460 { 00:15:33.460 "dma_device_id": "system", 00:15:33.460 "dma_device_type": 1 00:15:33.460 }, 00:15:33.460 { 00:15:33.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.460 "dma_device_type": 2 00:15:33.460 } 00:15:33.460 ], 00:15:33.460 "driver_specific": {} 00:15:33.460 } 00:15:33.460 ] 00:15:33.460 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:33.460 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:33.460 11:57:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:33.460 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.461 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.718 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.718 "name": "Existed_Raid", 00:15:33.718 "uuid": "73fdffc8-a53c-4dfa-8587-bbe275b4d525", 00:15:33.718 "strip_size_kb": 0, 00:15:33.718 "state": "online", 00:15:33.718 "raid_level": "raid1", 00:15:33.718 "superblock": false, 00:15:33.718 "num_base_bdevs": 3, 00:15:33.718 "num_base_bdevs_discovered": 3, 00:15:33.718 "num_base_bdevs_operational": 3, 00:15:33.718 "base_bdevs_list": [ 00:15:33.718 { 00:15:33.718 "name": "BaseBdev1", 00:15:33.718 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:33.718 "is_configured": true, 00:15:33.718 "data_offset": 0, 00:15:33.718 "data_size": 65536 00:15:33.718 }, 00:15:33.718 { 00:15:33.718 "name": "BaseBdev2", 00:15:33.718 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:33.718 "is_configured": true, 00:15:33.718 "data_offset": 0, 00:15:33.718 "data_size": 65536 00:15:33.718 }, 00:15:33.718 { 00:15:33.718 "name": "BaseBdev3", 00:15:33.718 "uuid": "2aead6ee-79c7-4755-b9ce-abd760f6d41b", 00:15:33.718 "is_configured": true, 00:15:33.718 "data_offset": 0, 00:15:33.718 "data_size": 65536 00:15:33.718 } 00:15:33.718 ] 00:15:33.718 }' 00:15:33.718 11:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.718 11:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
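With BaseBdev3 claimed, all three members are discovered and the volume above has moved from "configuring" to "online", picking up a real UUID in the process; verify_raid_bdev_properties then dumps the full Raid Volume descriptor and walks its base_bdevs_list. A short sketch of the same checks, using only the RPCs and jq filters that appear in this log:

  # The state flips to online once the last base bdev is configured.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'    # expected: online

  # List the configured members from the full bdev descriptor.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[] | .driver_specific.raid.base_bdevs_list[]
                   | select(.is_configured == true).name'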
00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:34.284 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.542 [2024-07-25 11:57:20.481491] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.542 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.542 "name": "Existed_Raid", 00:15:34.542 "aliases": [ 00:15:34.542 "73fdffc8-a53c-4dfa-8587-bbe275b4d525" 00:15:34.542 ], 00:15:34.542 "product_name": "Raid Volume", 00:15:34.542 "block_size": 512, 00:15:34.542 "num_blocks": 65536, 00:15:34.542 "uuid": "73fdffc8-a53c-4dfa-8587-bbe275b4d525", 00:15:34.542 "assigned_rate_limits": { 00:15:34.542 "rw_ios_per_sec": 0, 00:15:34.542 "rw_mbytes_per_sec": 0, 00:15:34.542 "r_mbytes_per_sec": 0, 00:15:34.542 "w_mbytes_per_sec": 0 00:15:34.542 }, 00:15:34.542 "claimed": false, 00:15:34.542 "zoned": false, 00:15:34.542 "supported_io_types": { 00:15:34.542 "read": true, 00:15:34.542 "write": true, 00:15:34.542 "unmap": false, 00:15:34.542 "flush": false, 00:15:34.542 "reset": true, 00:15:34.542 "nvme_admin": false, 00:15:34.542 "nvme_io": false, 00:15:34.542 "nvme_io_md": false, 00:15:34.542 "write_zeroes": true, 00:15:34.542 "zcopy": false, 00:15:34.542 "get_zone_info": false, 00:15:34.542 "zone_management": false, 00:15:34.542 "zone_append": false, 00:15:34.542 "compare": false, 00:15:34.542 "compare_and_write": false, 00:15:34.542 "abort": false, 00:15:34.542 "seek_hole": false, 00:15:34.542 "seek_data": false, 00:15:34.542 "copy": false, 00:15:34.542 "nvme_iov_md": false 00:15:34.542 }, 00:15:34.542 "memory_domains": [ 00:15:34.542 { 00:15:34.542 "dma_device_id": "system", 00:15:34.542 "dma_device_type": 1 00:15:34.542 }, 00:15:34.542 { 00:15:34.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.542 "dma_device_type": 2 00:15:34.542 }, 00:15:34.542 { 00:15:34.542 "dma_device_id": "system", 00:15:34.542 "dma_device_type": 1 00:15:34.542 }, 00:15:34.542 { 00:15:34.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.542 "dma_device_type": 2 00:15:34.542 }, 00:15:34.542 { 00:15:34.542 "dma_device_id": "system", 00:15:34.542 "dma_device_type": 1 00:15:34.542 }, 00:15:34.542 { 00:15:34.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.542 "dma_device_type": 2 00:15:34.542 } 00:15:34.542 ], 00:15:34.542 "driver_specific": { 00:15:34.542 "raid": { 00:15:34.542 "uuid": "73fdffc8-a53c-4dfa-8587-bbe275b4d525", 00:15:34.542 "strip_size_kb": 0, 00:15:34.542 "state": "online", 00:15:34.542 "raid_level": "raid1", 00:15:34.542 "superblock": false, 00:15:34.542 "num_base_bdevs": 3, 00:15:34.542 "num_base_bdevs_discovered": 3, 00:15:34.542 "num_base_bdevs_operational": 3, 00:15:34.542 "base_bdevs_list": [ 00:15:34.542 { 00:15:34.542 "name": "BaseBdev1", 00:15:34.542 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:34.542 "is_configured": true, 00:15:34.542 "data_offset": 0, 00:15:34.542 "data_size": 65536 00:15:34.542 }, 00:15:34.542 { 00:15:34.543 "name": "BaseBdev2", 00:15:34.543 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:34.543 "is_configured": true, 00:15:34.543 "data_offset": 0, 00:15:34.543 "data_size": 65536 00:15:34.543 }, 00:15:34.543 { 00:15:34.543 "name": "BaseBdev3", 00:15:34.543 "uuid": 
"2aead6ee-79c7-4755-b9ce-abd760f6d41b", 00:15:34.543 "is_configured": true, 00:15:34.543 "data_offset": 0, 00:15:34.543 "data_size": 65536 00:15:34.543 } 00:15:34.543 ] 00:15:34.543 } 00:15:34.543 } 00:15:34.543 }' 00:15:34.543 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.543 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:34.543 BaseBdev2 00:15:34.543 BaseBdev3' 00:15:34.543 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.543 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:34.543 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.800 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.800 "name": "BaseBdev1", 00:15:34.800 "aliases": [ 00:15:34.800 "3dded469-fe7e-41a6-bc48-8c75764a277d" 00:15:34.800 ], 00:15:34.800 "product_name": "Malloc disk", 00:15:34.800 "block_size": 512, 00:15:34.800 "num_blocks": 65536, 00:15:34.800 "uuid": "3dded469-fe7e-41a6-bc48-8c75764a277d", 00:15:34.800 "assigned_rate_limits": { 00:15:34.801 "rw_ios_per_sec": 0, 00:15:34.801 "rw_mbytes_per_sec": 0, 00:15:34.801 "r_mbytes_per_sec": 0, 00:15:34.801 "w_mbytes_per_sec": 0 00:15:34.801 }, 00:15:34.801 "claimed": true, 00:15:34.801 "claim_type": "exclusive_write", 00:15:34.801 "zoned": false, 00:15:34.801 "supported_io_types": { 00:15:34.801 "read": true, 00:15:34.801 "write": true, 00:15:34.801 "unmap": true, 00:15:34.801 "flush": true, 00:15:34.801 "reset": true, 00:15:34.801 "nvme_admin": false, 00:15:34.801 "nvme_io": false, 00:15:34.801 "nvme_io_md": false, 00:15:34.801 "write_zeroes": true, 00:15:34.801 "zcopy": true, 00:15:34.801 "get_zone_info": false, 00:15:34.801 "zone_management": false, 00:15:34.801 "zone_append": false, 00:15:34.801 "compare": false, 00:15:34.801 "compare_and_write": false, 00:15:34.801 "abort": true, 00:15:34.801 "seek_hole": false, 00:15:34.801 "seek_data": false, 00:15:34.801 "copy": true, 00:15:34.801 "nvme_iov_md": false 00:15:34.801 }, 00:15:34.801 "memory_domains": [ 00:15:34.801 { 00:15:34.801 "dma_device_id": "system", 00:15:34.801 "dma_device_type": 1 00:15:34.801 }, 00:15:34.801 { 00:15:34.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.801 "dma_device_type": 2 00:15:34.801 } 00:15:34.801 ], 00:15:34.801 "driver_specific": {} 00:15:34.801 }' 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.801 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.058 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.058 11:57:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.058 11:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.058 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.058 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.058 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.058 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.058 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.316 "name": "BaseBdev2", 00:15:35.316 "aliases": [ 00:15:35.316 "a71c67b5-89f8-46f6-8834-c3ad1c7a148c" 00:15:35.316 ], 00:15:35.316 "product_name": "Malloc disk", 00:15:35.316 "block_size": 512, 00:15:35.316 "num_blocks": 65536, 00:15:35.316 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:35.316 "assigned_rate_limits": { 00:15:35.316 "rw_ios_per_sec": 0, 00:15:35.316 "rw_mbytes_per_sec": 0, 00:15:35.316 "r_mbytes_per_sec": 0, 00:15:35.316 "w_mbytes_per_sec": 0 00:15:35.316 }, 00:15:35.316 "claimed": true, 00:15:35.316 "claim_type": "exclusive_write", 00:15:35.316 "zoned": false, 00:15:35.316 "supported_io_types": { 00:15:35.316 "read": true, 00:15:35.316 "write": true, 00:15:35.316 "unmap": true, 00:15:35.316 "flush": true, 00:15:35.316 "reset": true, 00:15:35.316 "nvme_admin": false, 00:15:35.316 "nvme_io": false, 00:15:35.316 "nvme_io_md": false, 00:15:35.316 "write_zeroes": true, 00:15:35.316 "zcopy": true, 00:15:35.316 "get_zone_info": false, 00:15:35.316 "zone_management": false, 00:15:35.316 "zone_append": false, 00:15:35.316 "compare": false, 00:15:35.316 "compare_and_write": false, 00:15:35.316 "abort": true, 00:15:35.316 "seek_hole": false, 00:15:35.316 "seek_data": false, 00:15:35.316 "copy": true, 00:15:35.316 "nvme_iov_md": false 00:15:35.316 }, 00:15:35.316 "memory_domains": [ 00:15:35.316 { 00:15:35.316 "dma_device_id": "system", 00:15:35.316 "dma_device_type": 1 00:15:35.316 }, 00:15:35.316 { 00:15:35.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.316 "dma_device_type": 2 00:15:35.316 } 00:15:35.316 ], 00:15:35.316 "driver_specific": {} 00:15:35.316 }' 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.316 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:35.574 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.832 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.832 "name": "BaseBdev3", 00:15:35.832 "aliases": [ 00:15:35.832 "2aead6ee-79c7-4755-b9ce-abd760f6d41b" 00:15:35.832 ], 00:15:35.832 "product_name": "Malloc disk", 00:15:35.832 "block_size": 512, 00:15:35.832 "num_blocks": 65536, 00:15:35.832 "uuid": "2aead6ee-79c7-4755-b9ce-abd760f6d41b", 00:15:35.832 "assigned_rate_limits": { 00:15:35.832 "rw_ios_per_sec": 0, 00:15:35.832 "rw_mbytes_per_sec": 0, 00:15:35.832 "r_mbytes_per_sec": 0, 00:15:35.832 "w_mbytes_per_sec": 0 00:15:35.832 }, 00:15:35.832 "claimed": true, 00:15:35.832 "claim_type": "exclusive_write", 00:15:35.832 "zoned": false, 00:15:35.832 "supported_io_types": { 00:15:35.832 "read": true, 00:15:35.832 "write": true, 00:15:35.832 "unmap": true, 00:15:35.832 "flush": true, 00:15:35.832 "reset": true, 00:15:35.832 "nvme_admin": false, 00:15:35.832 "nvme_io": false, 00:15:35.832 "nvme_io_md": false, 00:15:35.832 "write_zeroes": true, 00:15:35.832 "zcopy": true, 00:15:35.832 "get_zone_info": false, 00:15:35.832 "zone_management": false, 00:15:35.832 "zone_append": false, 00:15:35.832 "compare": false, 00:15:35.832 "compare_and_write": false, 00:15:35.832 "abort": true, 00:15:35.832 "seek_hole": false, 00:15:35.832 "seek_data": false, 00:15:35.832 "copy": true, 00:15:35.832 "nvme_iov_md": false 00:15:35.832 }, 00:15:35.832 "memory_domains": [ 00:15:35.832 { 00:15:35.832 "dma_device_id": "system", 00:15:35.832 "dma_device_type": 1 00:15:35.832 }, 00:15:35.832 { 00:15:35.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.832 "dma_device_type": 2 00:15:35.832 } 00:15:35.832 ], 00:15:35.832 "driver_specific": {} 00:15:35.832 }' 00:15:35.832 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.832 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.832 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.832 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.088 11:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.088 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.346 [2024-07-25 11:57:22.398365] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.346 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.604 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.604 "name": "Existed_Raid", 00:15:36.604 "uuid": "73fdffc8-a53c-4dfa-8587-bbe275b4d525", 00:15:36.604 "strip_size_kb": 0, 00:15:36.604 "state": "online", 00:15:36.604 "raid_level": "raid1", 00:15:36.604 "superblock": false, 00:15:36.604 "num_base_bdevs": 3, 00:15:36.604 "num_base_bdevs_discovered": 2, 00:15:36.604 "num_base_bdevs_operational": 2, 00:15:36.604 "base_bdevs_list": [ 00:15:36.604 { 00:15:36.604 "name": null, 00:15:36.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.604 "is_configured": false, 00:15:36.604 "data_offset": 0, 00:15:36.604 "data_size": 65536 00:15:36.604 }, 00:15:36.604 { 00:15:36.604 "name": "BaseBdev2", 00:15:36.604 "uuid": "a71c67b5-89f8-46f6-8834-c3ad1c7a148c", 00:15:36.604 "is_configured": true, 00:15:36.604 "data_offset": 0, 00:15:36.604 "data_size": 65536 00:15:36.604 }, 00:15:36.604 { 00:15:36.604 "name": "BaseBdev3", 00:15:36.604 "uuid": "2aead6ee-79c7-4755-b9ce-abd760f6d41b", 00:15:36.604 "is_configured": true, 00:15:36.604 "data_offset": 0, 00:15:36.604 "data_size": 65536 00:15:36.604 } 00:15:36.604 ] 00:15:36.604 }' 
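bdev_malloc_delete BaseBdev1 removes one member out from under the array, and because raid1 carries redundancy (the has_redundancy branch above), the expected state stays online while the discovered and operational counts drop to 2; the vacated slot is reported with a null name in the base_bdevs_list above. A minimal re-check along the same lines:

  # Dropping a single raid1 member must leave the volume online.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")
                   | [.state, .num_base_bdevs_discovered] | @tsv'
  # expected output: online<TAB>2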
00:15:36.604 11:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.604 11:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.172 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:37.172 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.172 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.172 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.467 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.467 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.467 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.467 [2024-07-25 11:57:23.566404] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.726 11:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:37.985 [2024-07-25 11:57:24.001438] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.986 [2024-07-25 11:57:24.001507] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.986 [2024-07-25 11:57:24.011615] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.986 [2024-07-25 11:57:24.011645] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.986 [2024-07-25 11:57:24.011656] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1653700 name Existed_Raid, state offline 00:15:37.986 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.986 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.986 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.986 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.245 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:38.245 11:57:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:38.245 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:38.245 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:38.245 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:38.245 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.504 BaseBdev2 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.504 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.073 11:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.332 [ 00:15:39.332 { 00:15:39.332 "name": "BaseBdev2", 00:15:39.332 "aliases": [ 00:15:39.332 "333cb5b5-0b82-49bf-b003-10fc9e19718b" 00:15:39.332 ], 00:15:39.332 "product_name": "Malloc disk", 00:15:39.332 "block_size": 512, 00:15:39.332 "num_blocks": 65536, 00:15:39.332 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:39.332 "assigned_rate_limits": { 00:15:39.332 "rw_ios_per_sec": 0, 00:15:39.332 "rw_mbytes_per_sec": 0, 00:15:39.332 "r_mbytes_per_sec": 0, 00:15:39.332 "w_mbytes_per_sec": 0 00:15:39.332 }, 00:15:39.332 "claimed": false, 00:15:39.332 "zoned": false, 00:15:39.332 "supported_io_types": { 00:15:39.332 "read": true, 00:15:39.332 "write": true, 00:15:39.332 "unmap": true, 00:15:39.332 "flush": true, 00:15:39.332 "reset": true, 00:15:39.332 "nvme_admin": false, 00:15:39.332 "nvme_io": false, 00:15:39.332 "nvme_io_md": false, 00:15:39.332 "write_zeroes": true, 00:15:39.332 "zcopy": true, 00:15:39.332 "get_zone_info": false, 00:15:39.332 "zone_management": false, 00:15:39.332 "zone_append": false, 00:15:39.332 "compare": false, 00:15:39.332 "compare_and_write": false, 00:15:39.332 "abort": true, 00:15:39.332 "seek_hole": false, 00:15:39.332 "seek_data": false, 00:15:39.332 "copy": true, 00:15:39.332 "nvme_iov_md": false 00:15:39.332 }, 00:15:39.332 "memory_domains": [ 00:15:39.332 { 00:15:39.332 "dma_device_id": "system", 00:15:39.332 "dma_device_type": 1 00:15:39.332 }, 00:15:39.332 { 00:15:39.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.332 "dma_device_type": 2 00:15:39.332 } 00:15:39.332 ], 00:15:39.332 "driver_specific": {} 00:15:39.332 } 00:15:39.332 ] 00:15:39.332 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:39.332 11:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 
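After the whole array has been torn down, BaseBdev2 is recreated here as a plain malloc disk with no raid configuration referencing it, so its descriptor reports "claimed": false; earlier, while it was a member of Existed_Raid, the same descriptor showed "claimed": true with claim_type "exclusive_write". A small sketch of that distinction, assuming the same socket and bdev names as this run:

  # A malloc disk created before any raid config starts out unclaimed.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 \
      | jq '.[0].claimed'      # expected: false
  # Once a later bdev_raid_create names BaseBdev2 as a member, the same query
  # should report claimed: true with claim_type "exclusive_write".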
00:15:39.332 11:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:39.332 11:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.592 BaseBdev3 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.852 11:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.421 [ 00:15:40.421 { 00:15:40.421 "name": "BaseBdev3", 00:15:40.421 "aliases": [ 00:15:40.421 "9a8195f1-f0d5-48f9-b548-cf7159a95674" 00:15:40.421 ], 00:15:40.421 "product_name": "Malloc disk", 00:15:40.421 "block_size": 512, 00:15:40.421 "num_blocks": 65536, 00:15:40.421 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:40.421 "assigned_rate_limits": { 00:15:40.421 "rw_ios_per_sec": 0, 00:15:40.421 "rw_mbytes_per_sec": 0, 00:15:40.421 "r_mbytes_per_sec": 0, 00:15:40.421 "w_mbytes_per_sec": 0 00:15:40.421 }, 00:15:40.421 "claimed": false, 00:15:40.421 "zoned": false, 00:15:40.421 "supported_io_types": { 00:15:40.421 "read": true, 00:15:40.421 "write": true, 00:15:40.421 "unmap": true, 00:15:40.421 "flush": true, 00:15:40.421 "reset": true, 00:15:40.421 "nvme_admin": false, 00:15:40.421 "nvme_io": false, 00:15:40.421 "nvme_io_md": false, 00:15:40.421 "write_zeroes": true, 00:15:40.421 "zcopy": true, 00:15:40.421 "get_zone_info": false, 00:15:40.421 "zone_management": false, 00:15:40.421 "zone_append": false, 00:15:40.421 "compare": false, 00:15:40.421 "compare_and_write": false, 00:15:40.421 "abort": true, 00:15:40.421 "seek_hole": false, 00:15:40.421 "seek_data": false, 00:15:40.421 "copy": true, 00:15:40.421 "nvme_iov_md": false 00:15:40.421 }, 00:15:40.421 "memory_domains": [ 00:15:40.421 { 00:15:40.421 "dma_device_id": "system", 00:15:40.421 "dma_device_type": 1 00:15:40.421 }, 00:15:40.421 { 00:15:40.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.421 "dma_device_type": 2 00:15:40.421 } 00:15:40.421 ], 00:15:40.421 "driver_specific": {} 00:15:40.421 } 00:15:40.421 ] 00:15:40.421 11:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:40.421 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:40.421 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:40.421 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:40.679 [2024-07-25 11:57:26.670761] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.679 [2024-07-25 11:57:26.670805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.679 [2024-07-25 11:57:26.670824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.679 [2024-07-25 11:57:26.672045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.679 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:40.679 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.679 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.680 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.938 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.938 "name": "Existed_Raid", 00:15:40.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.938 "strip_size_kb": 0, 00:15:40.938 "state": "configuring", 00:15:40.938 "raid_level": "raid1", 00:15:40.938 "superblock": false, 00:15:40.938 "num_base_bdevs": 3, 00:15:40.938 "num_base_bdevs_discovered": 2, 00:15:40.938 "num_base_bdevs_operational": 3, 00:15:40.938 "base_bdevs_list": [ 00:15:40.938 { 00:15:40.938 "name": "BaseBdev1", 00:15:40.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.938 "is_configured": false, 00:15:40.938 "data_offset": 0, 00:15:40.938 "data_size": 0 00:15:40.938 }, 00:15:40.938 { 00:15:40.938 "name": "BaseBdev2", 00:15:40.938 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:40.938 "is_configured": true, 00:15:40.938 "data_offset": 0, 00:15:40.938 "data_size": 65536 00:15:40.938 }, 00:15:40.938 { 00:15:40.938 "name": "BaseBdev3", 00:15:40.938 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:40.938 "is_configured": true, 00:15:40.938 "data_offset": 0, 00:15:40.938 "data_size": 65536 00:15:40.938 } 00:15:40.938 ] 00:15:40.938 }' 00:15:40.938 11:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.938 11:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.504 11:57:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:41.504 [2024-07-25 11:57:27.617258] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.763 "name": "Existed_Raid", 00:15:41.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.763 "strip_size_kb": 0, 00:15:41.763 "state": "configuring", 00:15:41.763 "raid_level": "raid1", 00:15:41.763 "superblock": false, 00:15:41.763 "num_base_bdevs": 3, 00:15:41.763 "num_base_bdevs_discovered": 1, 00:15:41.763 "num_base_bdevs_operational": 3, 00:15:41.763 "base_bdevs_list": [ 00:15:41.763 { 00:15:41.763 "name": "BaseBdev1", 00:15:41.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.763 "is_configured": false, 00:15:41.763 "data_offset": 0, 00:15:41.763 "data_size": 0 00:15:41.763 }, 00:15:41.763 { 00:15:41.763 "name": null, 00:15:41.763 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:41.763 "is_configured": false, 00:15:41.763 "data_offset": 0, 00:15:41.763 "data_size": 65536 00:15:41.763 }, 00:15:41.763 { 00:15:41.763 "name": "BaseBdev3", 00:15:41.763 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:41.763 "is_configured": true, 00:15:41.763 "data_offset": 0, 00:15:41.763 "data_size": 65536 00:15:41.763 } 00:15:41.763 ] 00:15:41.763 }' 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.763 11:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 11:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.330 11:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.588 11:57:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:42.588 11:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.846 [2024-07-25 11:57:28.715339] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.846 BaseBdev1 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:42.846 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.105 11:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.105 [ 00:15:43.105 { 00:15:43.105 "name": "BaseBdev1", 00:15:43.105 "aliases": [ 00:15:43.105 "2c503a88-0b02-448a-bcd9-bcf4c309aaa6" 00:15:43.105 ], 00:15:43.105 "product_name": "Malloc disk", 00:15:43.105 "block_size": 512, 00:15:43.105 "num_blocks": 65536, 00:15:43.105 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:43.105 "assigned_rate_limits": { 00:15:43.105 "rw_ios_per_sec": 0, 00:15:43.105 "rw_mbytes_per_sec": 0, 00:15:43.105 "r_mbytes_per_sec": 0, 00:15:43.105 "w_mbytes_per_sec": 0 00:15:43.105 }, 00:15:43.105 "claimed": true, 00:15:43.105 "claim_type": "exclusive_write", 00:15:43.105 "zoned": false, 00:15:43.105 "supported_io_types": { 00:15:43.105 "read": true, 00:15:43.105 "write": true, 00:15:43.105 "unmap": true, 00:15:43.105 "flush": true, 00:15:43.105 "reset": true, 00:15:43.105 "nvme_admin": false, 00:15:43.105 "nvme_io": false, 00:15:43.105 "nvme_io_md": false, 00:15:43.105 "write_zeroes": true, 00:15:43.105 "zcopy": true, 00:15:43.105 "get_zone_info": false, 00:15:43.105 "zone_management": false, 00:15:43.105 "zone_append": false, 00:15:43.105 "compare": false, 00:15:43.105 "compare_and_write": false, 00:15:43.105 "abort": true, 00:15:43.105 "seek_hole": false, 00:15:43.105 "seek_data": false, 00:15:43.105 "copy": true, 00:15:43.105 "nvme_iov_md": false 00:15:43.105 }, 00:15:43.105 "memory_domains": [ 00:15:43.105 { 00:15:43.105 "dma_device_id": "system", 00:15:43.105 "dma_device_type": 1 00:15:43.105 }, 00:15:43.105 { 00:15:43.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.105 "dma_device_type": 2 00:15:43.105 } 00:15:43.105 ], 00:15:43.105 "driver_specific": {} 00:15:43.105 } 00:15:43.105 ] 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
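(Annotation) The trace above confirms BaseBdev2 is no longer configured and then brings BaseBdev1 into existence as a malloc bdev, which the raid module claims automatically once examine runs. A minimal stand-alone sketch of that recovery step, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and using the same rpc.py path as this run:

rpc='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# Create a 32 MB malloc bdev with a 512-byte block size (65536 blocks, matching the dump above)
# under the exact name the raid configuration expects.
$rpc bdev_malloc_create 32 512 -b BaseBdev1
# Let the bdev examine callbacks run so the raid module can claim the new base bdev.
$rpc bdev_wait_for_examine
# The raid stays in "configuring" until every base bdev is present.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'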
00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.105 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.363 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.363 "name": "Existed_Raid", 00:15:43.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.364 "strip_size_kb": 0, 00:15:43.364 "state": "configuring", 00:15:43.364 "raid_level": "raid1", 00:15:43.364 "superblock": false, 00:15:43.364 "num_base_bdevs": 3, 00:15:43.364 "num_base_bdevs_discovered": 2, 00:15:43.364 "num_base_bdevs_operational": 3, 00:15:43.364 "base_bdevs_list": [ 00:15:43.364 { 00:15:43.364 "name": "BaseBdev1", 00:15:43.364 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:43.364 "is_configured": true, 00:15:43.364 "data_offset": 0, 00:15:43.364 "data_size": 65536 00:15:43.364 }, 00:15:43.364 { 00:15:43.364 "name": null, 00:15:43.364 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:43.364 "is_configured": false, 00:15:43.364 "data_offset": 0, 00:15:43.364 "data_size": 65536 00:15:43.364 }, 00:15:43.364 { 00:15:43.364 "name": "BaseBdev3", 00:15:43.364 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:43.364 "is_configured": true, 00:15:43.364 "data_offset": 0, 00:15:43.364 "data_size": 65536 00:15:43.364 } 00:15:43.364 ] 00:15:43.364 }' 00:15:43.364 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.364 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.931 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.931 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.189 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:44.189 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:44.447 [2024-07-25 11:57:30.395806] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:44.447 
11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.447 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.705 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.705 "name": "Existed_Raid", 00:15:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.705 "strip_size_kb": 0, 00:15:44.705 "state": "configuring", 00:15:44.705 "raid_level": "raid1", 00:15:44.705 "superblock": false, 00:15:44.705 "num_base_bdevs": 3, 00:15:44.705 "num_base_bdevs_discovered": 1, 00:15:44.705 "num_base_bdevs_operational": 3, 00:15:44.705 "base_bdevs_list": [ 00:15:44.705 { 00:15:44.705 "name": "BaseBdev1", 00:15:44.705 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:44.705 "is_configured": true, 00:15:44.705 "data_offset": 0, 00:15:44.705 "data_size": 65536 00:15:44.705 }, 00:15:44.705 { 00:15:44.705 "name": null, 00:15:44.705 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:44.705 "is_configured": false, 00:15:44.705 "data_offset": 0, 00:15:44.705 "data_size": 65536 00:15:44.705 }, 00:15:44.705 { 00:15:44.705 "name": null, 00:15:44.705 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:44.705 "is_configured": false, 00:15:44.705 "data_offset": 0, 00:15:44.705 "data_size": 65536 00:15:44.705 } 00:15:44.705 ] 00:15:44.705 }' 00:15:44.705 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.705 11:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.270 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.270 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:45.529 [2024-07-25 11:57:31.615045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.529 11:57:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.529 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.787 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.787 "name": "Existed_Raid", 00:15:45.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.787 "strip_size_kb": 0, 00:15:45.787 "state": "configuring", 00:15:45.787 "raid_level": "raid1", 00:15:45.787 "superblock": false, 00:15:45.787 "num_base_bdevs": 3, 00:15:45.787 "num_base_bdevs_discovered": 2, 00:15:45.787 "num_base_bdevs_operational": 3, 00:15:45.787 "base_bdevs_list": [ 00:15:45.787 { 00:15:45.787 "name": "BaseBdev1", 00:15:45.787 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:45.787 "is_configured": true, 00:15:45.787 "data_offset": 0, 00:15:45.787 "data_size": 65536 00:15:45.787 }, 00:15:45.787 { 00:15:45.787 "name": null, 00:15:45.787 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:45.787 "is_configured": false, 00:15:45.787 "data_offset": 0, 00:15:45.787 "data_size": 65536 00:15:45.787 }, 00:15:45.787 { 00:15:45.787 "name": "BaseBdev3", 00:15:45.787 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:45.787 "is_configured": true, 00:15:45.787 "data_offset": 0, 00:15:45.787 "data_size": 65536 00:15:45.787 } 00:15:45.787 ] 00:15:45.787 }' 00:15:45.787 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.787 11:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.353 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.353 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.612 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:46.612 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.870 [2024-07-25 
11:57:32.802378] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.870 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.129 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.129 "name": "Existed_Raid", 00:15:47.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.129 "strip_size_kb": 0, 00:15:47.129 "state": "configuring", 00:15:47.129 "raid_level": "raid1", 00:15:47.129 "superblock": false, 00:15:47.129 "num_base_bdevs": 3, 00:15:47.129 "num_base_bdevs_discovered": 1, 00:15:47.129 "num_base_bdevs_operational": 3, 00:15:47.129 "base_bdevs_list": [ 00:15:47.129 { 00:15:47.129 "name": null, 00:15:47.129 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:47.129 "is_configured": false, 00:15:47.129 "data_offset": 0, 00:15:47.129 "data_size": 65536 00:15:47.129 }, 00:15:47.129 { 00:15:47.129 "name": null, 00:15:47.129 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:47.129 "is_configured": false, 00:15:47.129 "data_offset": 0, 00:15:47.129 "data_size": 65536 00:15:47.129 }, 00:15:47.129 { 00:15:47.129 "name": "BaseBdev3", 00:15:47.129 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:47.129 "is_configured": true, 00:15:47.129 "data_offset": 0, 00:15:47.129 "data_size": 65536 00:15:47.129 } 00:15:47.129 ] 00:15:47.129 }' 00:15:47.129 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.129 11:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.714 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.714 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.973 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:47.973 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:47.973 [2024-07-25 11:57:34.080106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.231 "name": "Existed_Raid", 00:15:48.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.231 "strip_size_kb": 0, 00:15:48.231 "state": "configuring", 00:15:48.231 "raid_level": "raid1", 00:15:48.231 "superblock": false, 00:15:48.231 "num_base_bdevs": 3, 00:15:48.231 "num_base_bdevs_discovered": 2, 00:15:48.231 "num_base_bdevs_operational": 3, 00:15:48.231 "base_bdevs_list": [ 00:15:48.231 { 00:15:48.231 "name": null, 00:15:48.231 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:48.231 "is_configured": false, 00:15:48.231 "data_offset": 0, 00:15:48.231 "data_size": 65536 00:15:48.231 }, 00:15:48.231 { 00:15:48.231 "name": "BaseBdev2", 00:15:48.231 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:48.231 "is_configured": true, 00:15:48.231 "data_offset": 0, 00:15:48.231 "data_size": 65536 00:15:48.231 }, 00:15:48.231 { 00:15:48.231 "name": "BaseBdev3", 00:15:48.231 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:48.231 "is_configured": true, 00:15:48.231 "data_offset": 0, 00:15:48.231 "data_size": 65536 00:15:48.231 } 00:15:48.231 ] 00:15:48.231 }' 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.231 11:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.797 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:48.797 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.054 11:57:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:49.054 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.054 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:49.310 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2c503a88-0b02-448a-bcd9-bcf4c309aaa6 00:15:49.568 [2024-07-25 11:57:35.567069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:49.568 [2024-07-25 11:57:35.567103] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1654ea0 00:15:49.568 [2024-07-25 11:57:35.567111] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:49.568 [2024-07-25 11:57:35.567298] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17f9a10 00:15:49.568 [2024-07-25 11:57:35.567410] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1654ea0 00:15:49.568 [2024-07-25 11:57:35.567420] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1654ea0 00:15:49.568 [2024-07-25 11:57:35.567566] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.568 NewBaseBdev 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:49.568 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.827 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.118 [ 00:15:50.118 { 00:15:50.118 "name": "NewBaseBdev", 00:15:50.118 "aliases": [ 00:15:50.118 "2c503a88-0b02-448a-bcd9-bcf4c309aaa6" 00:15:50.118 ], 00:15:50.118 "product_name": "Malloc disk", 00:15:50.118 "block_size": 512, 00:15:50.118 "num_blocks": 65536, 00:15:50.118 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:50.118 "assigned_rate_limits": { 00:15:50.118 "rw_ios_per_sec": 0, 00:15:50.118 "rw_mbytes_per_sec": 0, 00:15:50.118 "r_mbytes_per_sec": 0, 00:15:50.118 "w_mbytes_per_sec": 0 00:15:50.118 }, 00:15:50.118 "claimed": true, 00:15:50.118 "claim_type": "exclusive_write", 00:15:50.118 "zoned": false, 00:15:50.118 "supported_io_types": { 00:15:50.118 "read": true, 00:15:50.118 "write": true, 00:15:50.118 "unmap": true, 00:15:50.118 "flush": true, 00:15:50.118 "reset": true, 00:15:50.118 "nvme_admin": false, 00:15:50.118 "nvme_io": false, 00:15:50.118 "nvme_io_md": false, 
00:15:50.118 "write_zeroes": true, 00:15:50.118 "zcopy": true, 00:15:50.118 "get_zone_info": false, 00:15:50.118 "zone_management": false, 00:15:50.118 "zone_append": false, 00:15:50.118 "compare": false, 00:15:50.118 "compare_and_write": false, 00:15:50.118 "abort": true, 00:15:50.118 "seek_hole": false, 00:15:50.118 "seek_data": false, 00:15:50.118 "copy": true, 00:15:50.118 "nvme_iov_md": false 00:15:50.118 }, 00:15:50.118 "memory_domains": [ 00:15:50.118 { 00:15:50.118 "dma_device_id": "system", 00:15:50.118 "dma_device_type": 1 00:15:50.118 }, 00:15:50.118 { 00:15:50.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.118 "dma_device_type": 2 00:15:50.118 } 00:15:50.118 ], 00:15:50.118 "driver_specific": {} 00:15:50.118 } 00:15:50.118 ] 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.118 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.376 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.376 "name": "Existed_Raid", 00:15:50.376 "uuid": "96d0a19f-86d9-4622-b040-a80859ef1e75", 00:15:50.376 "strip_size_kb": 0, 00:15:50.376 "state": "online", 00:15:50.376 "raid_level": "raid1", 00:15:50.376 "superblock": false, 00:15:50.376 "num_base_bdevs": 3, 00:15:50.376 "num_base_bdevs_discovered": 3, 00:15:50.376 "num_base_bdevs_operational": 3, 00:15:50.376 "base_bdevs_list": [ 00:15:50.376 { 00:15:50.376 "name": "NewBaseBdev", 00:15:50.376 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:50.376 "is_configured": true, 00:15:50.376 "data_offset": 0, 00:15:50.376 "data_size": 65536 00:15:50.376 }, 00:15:50.376 { 00:15:50.376 "name": "BaseBdev2", 00:15:50.376 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:50.376 "is_configured": true, 00:15:50.376 "data_offset": 0, 00:15:50.376 "data_size": 65536 00:15:50.376 }, 00:15:50.376 { 00:15:50.376 "name": "BaseBdev3", 00:15:50.376 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:50.376 "is_configured": true, 00:15:50.376 "data_offset": 0, 00:15:50.376 "data_size": 65536 00:15:50.376 } 00:15:50.376 ] 00:15:50.376 }' 
00:15:50.376 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.376 11:57:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:50.942 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:50.943 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.943 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:51.201 [2024-07-25 11:57:37.067308] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:51.201 "name": "Existed_Raid", 00:15:51.201 "aliases": [ 00:15:51.201 "96d0a19f-86d9-4622-b040-a80859ef1e75" 00:15:51.201 ], 00:15:51.201 "product_name": "Raid Volume", 00:15:51.201 "block_size": 512, 00:15:51.201 "num_blocks": 65536, 00:15:51.201 "uuid": "96d0a19f-86d9-4622-b040-a80859ef1e75", 00:15:51.201 "assigned_rate_limits": { 00:15:51.201 "rw_ios_per_sec": 0, 00:15:51.201 "rw_mbytes_per_sec": 0, 00:15:51.201 "r_mbytes_per_sec": 0, 00:15:51.201 "w_mbytes_per_sec": 0 00:15:51.201 }, 00:15:51.201 "claimed": false, 00:15:51.201 "zoned": false, 00:15:51.201 "supported_io_types": { 00:15:51.201 "read": true, 00:15:51.201 "write": true, 00:15:51.201 "unmap": false, 00:15:51.201 "flush": false, 00:15:51.201 "reset": true, 00:15:51.201 "nvme_admin": false, 00:15:51.201 "nvme_io": false, 00:15:51.201 "nvme_io_md": false, 00:15:51.201 "write_zeroes": true, 00:15:51.201 "zcopy": false, 00:15:51.201 "get_zone_info": false, 00:15:51.201 "zone_management": false, 00:15:51.201 "zone_append": false, 00:15:51.201 "compare": false, 00:15:51.201 "compare_and_write": false, 00:15:51.201 "abort": false, 00:15:51.201 "seek_hole": false, 00:15:51.201 "seek_data": false, 00:15:51.201 "copy": false, 00:15:51.201 "nvme_iov_md": false 00:15:51.201 }, 00:15:51.201 "memory_domains": [ 00:15:51.201 { 00:15:51.201 "dma_device_id": "system", 00:15:51.201 "dma_device_type": 1 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.201 "dma_device_type": 2 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "dma_device_id": "system", 00:15:51.201 "dma_device_type": 1 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.201 "dma_device_type": 2 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "dma_device_id": "system", 00:15:51.201 "dma_device_type": 1 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.201 "dma_device_type": 2 00:15:51.201 } 00:15:51.201 ], 00:15:51.201 "driver_specific": { 00:15:51.201 "raid": { 00:15:51.201 "uuid": "96d0a19f-86d9-4622-b040-a80859ef1e75", 00:15:51.201 "strip_size_kb": 0, 00:15:51.201 
"state": "online", 00:15:51.201 "raid_level": "raid1", 00:15:51.201 "superblock": false, 00:15:51.201 "num_base_bdevs": 3, 00:15:51.201 "num_base_bdevs_discovered": 3, 00:15:51.201 "num_base_bdevs_operational": 3, 00:15:51.201 "base_bdevs_list": [ 00:15:51.201 { 00:15:51.201 "name": "NewBaseBdev", 00:15:51.201 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:51.201 "is_configured": true, 00:15:51.201 "data_offset": 0, 00:15:51.201 "data_size": 65536 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "name": "BaseBdev2", 00:15:51.201 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:51.201 "is_configured": true, 00:15:51.201 "data_offset": 0, 00:15:51.201 "data_size": 65536 00:15:51.201 }, 00:15:51.201 { 00:15:51.201 "name": "BaseBdev3", 00:15:51.201 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:51.201 "is_configured": true, 00:15:51.201 "data_offset": 0, 00:15:51.201 "data_size": 65536 00:15:51.201 } 00:15:51.201 ] 00:15:51.201 } 00:15:51.201 } 00:15:51.201 }' 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:51.201 BaseBdev2 00:15:51.201 BaseBdev3' 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:51.201 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.460 "name": "NewBaseBdev", 00:15:51.460 "aliases": [ 00:15:51.460 "2c503a88-0b02-448a-bcd9-bcf4c309aaa6" 00:15:51.460 ], 00:15:51.460 "product_name": "Malloc disk", 00:15:51.460 "block_size": 512, 00:15:51.460 "num_blocks": 65536, 00:15:51.460 "uuid": "2c503a88-0b02-448a-bcd9-bcf4c309aaa6", 00:15:51.460 "assigned_rate_limits": { 00:15:51.460 "rw_ios_per_sec": 0, 00:15:51.460 "rw_mbytes_per_sec": 0, 00:15:51.460 "r_mbytes_per_sec": 0, 00:15:51.460 "w_mbytes_per_sec": 0 00:15:51.460 }, 00:15:51.460 "claimed": true, 00:15:51.460 "claim_type": "exclusive_write", 00:15:51.460 "zoned": false, 00:15:51.460 "supported_io_types": { 00:15:51.460 "read": true, 00:15:51.460 "write": true, 00:15:51.460 "unmap": true, 00:15:51.460 "flush": true, 00:15:51.460 "reset": true, 00:15:51.460 "nvme_admin": false, 00:15:51.460 "nvme_io": false, 00:15:51.460 "nvme_io_md": false, 00:15:51.460 "write_zeroes": true, 00:15:51.460 "zcopy": true, 00:15:51.460 "get_zone_info": false, 00:15:51.460 "zone_management": false, 00:15:51.460 "zone_append": false, 00:15:51.460 "compare": false, 00:15:51.460 "compare_and_write": false, 00:15:51.460 "abort": true, 00:15:51.460 "seek_hole": false, 00:15:51.460 "seek_data": false, 00:15:51.460 "copy": true, 00:15:51.460 "nvme_iov_md": false 00:15:51.460 }, 00:15:51.460 "memory_domains": [ 00:15:51.460 { 00:15:51.460 "dma_device_id": "system", 00:15:51.460 "dma_device_type": 1 00:15:51.460 }, 00:15:51.460 { 00:15:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.460 "dma_device_type": 2 00:15:51.460 } 00:15:51.460 ], 00:15:51.460 "driver_specific": {} 00:15:51.460 }' 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.460 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.718 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:51.976 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.976 "name": "BaseBdev2", 00:15:51.976 "aliases": [ 00:15:51.976 "333cb5b5-0b82-49bf-b003-10fc9e19718b" 00:15:51.976 ], 00:15:51.976 "product_name": "Malloc disk", 00:15:51.976 "block_size": 512, 00:15:51.976 "num_blocks": 65536, 00:15:51.976 "uuid": "333cb5b5-0b82-49bf-b003-10fc9e19718b", 00:15:51.976 "assigned_rate_limits": { 00:15:51.976 "rw_ios_per_sec": 0, 00:15:51.976 "rw_mbytes_per_sec": 0, 00:15:51.976 "r_mbytes_per_sec": 0, 00:15:51.976 "w_mbytes_per_sec": 0 00:15:51.976 }, 00:15:51.976 "claimed": true, 00:15:51.976 "claim_type": "exclusive_write", 00:15:51.976 "zoned": false, 00:15:51.976 "supported_io_types": { 00:15:51.976 "read": true, 00:15:51.976 "write": true, 00:15:51.976 "unmap": true, 00:15:51.976 "flush": true, 00:15:51.976 "reset": true, 00:15:51.976 "nvme_admin": false, 00:15:51.976 "nvme_io": false, 00:15:51.976 "nvme_io_md": false, 00:15:51.976 "write_zeroes": true, 00:15:51.976 "zcopy": true, 00:15:51.976 "get_zone_info": false, 00:15:51.976 "zone_management": false, 00:15:51.976 "zone_append": false, 00:15:51.976 "compare": false, 00:15:51.976 "compare_and_write": false, 00:15:51.976 "abort": true, 00:15:51.976 "seek_hole": false, 00:15:51.976 "seek_data": false, 00:15:51.976 "copy": true, 00:15:51.976 "nvme_iov_md": false 00:15:51.976 }, 00:15:51.976 "memory_domains": [ 00:15:51.976 { 00:15:51.976 "dma_device_id": "system", 00:15:51.976 "dma_device_type": 1 00:15:51.976 }, 00:15:51.976 { 00:15:51.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.976 "dma_device_type": 2 00:15:51.976 } 00:15:51.976 ], 00:15:51.976 "driver_specific": {} 00:15:51.976 }' 00:15:51.976 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.976 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.976 11:57:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.976 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.976 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.976 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.976 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:52.235 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.493 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.493 "name": "BaseBdev3", 00:15:52.493 "aliases": [ 00:15:52.493 "9a8195f1-f0d5-48f9-b548-cf7159a95674" 00:15:52.493 ], 00:15:52.493 "product_name": "Malloc disk", 00:15:52.493 "block_size": 512, 00:15:52.493 "num_blocks": 65536, 00:15:52.493 "uuid": "9a8195f1-f0d5-48f9-b548-cf7159a95674", 00:15:52.493 "assigned_rate_limits": { 00:15:52.493 "rw_ios_per_sec": 0, 00:15:52.493 "rw_mbytes_per_sec": 0, 00:15:52.493 "r_mbytes_per_sec": 0, 00:15:52.493 "w_mbytes_per_sec": 0 00:15:52.493 }, 00:15:52.493 "claimed": true, 00:15:52.493 "claim_type": "exclusive_write", 00:15:52.493 "zoned": false, 00:15:52.493 "supported_io_types": { 00:15:52.493 "read": true, 00:15:52.493 "write": true, 00:15:52.493 "unmap": true, 00:15:52.493 "flush": true, 00:15:52.493 "reset": true, 00:15:52.493 "nvme_admin": false, 00:15:52.493 "nvme_io": false, 00:15:52.493 "nvme_io_md": false, 00:15:52.493 "write_zeroes": true, 00:15:52.493 "zcopy": true, 00:15:52.493 "get_zone_info": false, 00:15:52.494 "zone_management": false, 00:15:52.494 "zone_append": false, 00:15:52.494 "compare": false, 00:15:52.494 "compare_and_write": false, 00:15:52.494 "abort": true, 00:15:52.494 "seek_hole": false, 00:15:52.494 "seek_data": false, 00:15:52.494 "copy": true, 00:15:52.494 "nvme_iov_md": false 00:15:52.494 }, 00:15:52.494 "memory_domains": [ 00:15:52.494 { 00:15:52.494 "dma_device_id": "system", 00:15:52.494 "dma_device_type": 1 00:15:52.494 }, 00:15:52.494 { 00:15:52.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.494 "dma_device_type": 2 00:15:52.494 } 00:15:52.494 ], 00:15:52.494 "driver_specific": {} 00:15:52.494 }' 00:15:52.494 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.494 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.494 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.494 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.752 11:57:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.752 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:53.010 [2024-07-25 11:57:39.048271] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.010 [2024-07-25 11:57:39.048299] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.010 [2024-07-25 11:57:39.048363] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.011 [2024-07-25 11:57:39.048607] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.011 [2024-07-25 11:57:39.048618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1654ea0 name Existed_Raid, state offline 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4149146 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4149146 ']' 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4149146 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4149146 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4149146' 00:15:53.011 killing process with pid 4149146 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4149146 00:15:53.011 [2024-07-25 11:57:39.123873] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.011 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4149146 00:15:53.270 [2024-07-25 11:57:39.147936] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.270 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:53.270 00:15:53.270 real 0m26.806s 00:15:53.270 user 0m49.566s 00:15:53.270 sys 0m4.843s 00:15:53.270 11:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.270 11:57:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.270 ************************************ 00:15:53.270 END TEST raid_state_function_test 00:15:53.270 ************************************ 00:15:53.270 11:57:39 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:53.270 11:57:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:53.270 11:57:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.270 11:57:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.530 ************************************ 00:15:53.530 START TEST raid_state_function_test_sb 00:15:53.530 ************************************ 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' 
true = true ']' 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4154238 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4154238' 00:15:53.530 Process raid pid: 4154238 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4154238 /var/tmp/spdk-raid.sock 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4154238 ']' 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.530 11:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.530 [2024-07-25 11:57:39.488787] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:15:53.530 [2024-07-25 11:57:39.488844] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested 
device 0000:3d:02.2 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:53.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.530 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:53.530 [2024-07-25 11:57:39.622767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.789 [2024-07-25 11:57:39.711464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.789 [2024-07-25 11:57:39.765300] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.789 [2024-07-25 11:57:39.765328] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.357 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 
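(Annotation) The superblock variant of the test drives the same RPCs against a freshly started bdev_svc application; the only functional difference visible in the trace is the -s flag passed to bdev_raid_create. A rough sketch of that bring-up, assuming the same workspace layout as this run; the readiness loop below is a simplification of the traced waitforlisten helper, not its actual implementation:

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
$spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
# Wait until the RPC socket answers before issuing any commands.
until $spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
# -s requests an on-disk superblock; the named base bdevs may be created later.
$spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid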
00:15:54.357 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:54.357 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:54.616 [2024-07-25 11:57:40.595181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.616 [2024-07-25 11:57:40.595218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.616 [2024-07-25 11:57:40.595228] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.616 [2024-07-25 11:57:40.595238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.616 [2024-07-25 11:57:40.595246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.616 [2024-07-25 11:57:40.595256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.616 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.875 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.875 "name": "Existed_Raid", 00:15:54.875 "uuid": "422cbe38-0458-4026-84ca-8a2cae95dec2", 00:15:54.875 "strip_size_kb": 0, 00:15:54.875 "state": "configuring", 00:15:54.875 "raid_level": "raid1", 00:15:54.875 "superblock": true, 00:15:54.875 "num_base_bdevs": 3, 00:15:54.875 "num_base_bdevs_discovered": 0, 00:15:54.875 "num_base_bdevs_operational": 3, 00:15:54.875 "base_bdevs_list": [ 00:15:54.875 { 00:15:54.875 "name": "BaseBdev1", 00:15:54.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.875 "is_configured": false, 00:15:54.875 "data_offset": 0, 00:15:54.875 "data_size": 0 00:15:54.875 }, 00:15:54.875 { 00:15:54.875 "name": "BaseBdev2", 00:15:54.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.875 "is_configured": false, 
00:15:54.875 "data_offset": 0, 00:15:54.875 "data_size": 0 00:15:54.875 }, 00:15:54.875 { 00:15:54.875 "name": "BaseBdev3", 00:15:54.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.875 "is_configured": false, 00:15:54.875 "data_offset": 0, 00:15:54.875 "data_size": 0 00:15:54.875 } 00:15:54.875 ] 00:15:54.875 }' 00:15:54.875 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.875 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.442 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:55.442 [2024-07-25 11:57:41.553568] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.442 [2024-07-25 11:57:41.553602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1658f40 name Existed_Raid, state configuring 00:15:55.701 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:55.701 [2024-07-25 11:57:41.782205] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.701 [2024-07-25 11:57:41.782237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.701 [2024-07-25 11:57:41.782246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.701 [2024-07-25 11:57:41.782257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.701 [2024-07-25 11:57:41.782265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.701 [2024-07-25 11:57:41.782274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.701 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.960 [2024-07-25 11:57:42.020405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.960 BaseBdev1 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.960 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.219 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 
00:15:56.478 [ 00:15:56.478 { 00:15:56.478 "name": "BaseBdev1", 00:15:56.478 "aliases": [ 00:15:56.478 "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4" 00:15:56.478 ], 00:15:56.478 "product_name": "Malloc disk", 00:15:56.478 "block_size": 512, 00:15:56.478 "num_blocks": 65536, 00:15:56.478 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:15:56.478 "assigned_rate_limits": { 00:15:56.478 "rw_ios_per_sec": 0, 00:15:56.478 "rw_mbytes_per_sec": 0, 00:15:56.478 "r_mbytes_per_sec": 0, 00:15:56.478 "w_mbytes_per_sec": 0 00:15:56.478 }, 00:15:56.478 "claimed": true, 00:15:56.478 "claim_type": "exclusive_write", 00:15:56.478 "zoned": false, 00:15:56.478 "supported_io_types": { 00:15:56.478 "read": true, 00:15:56.478 "write": true, 00:15:56.478 "unmap": true, 00:15:56.478 "flush": true, 00:15:56.478 "reset": true, 00:15:56.478 "nvme_admin": false, 00:15:56.478 "nvme_io": false, 00:15:56.478 "nvme_io_md": false, 00:15:56.478 "write_zeroes": true, 00:15:56.478 "zcopy": true, 00:15:56.478 "get_zone_info": false, 00:15:56.478 "zone_management": false, 00:15:56.478 "zone_append": false, 00:15:56.478 "compare": false, 00:15:56.478 "compare_and_write": false, 00:15:56.478 "abort": true, 00:15:56.478 "seek_hole": false, 00:15:56.478 "seek_data": false, 00:15:56.478 "copy": true, 00:15:56.478 "nvme_iov_md": false 00:15:56.478 }, 00:15:56.478 "memory_domains": [ 00:15:56.478 { 00:15:56.478 "dma_device_id": "system", 00:15:56.478 "dma_device_type": 1 00:15:56.478 }, 00:15:56.478 { 00:15:56.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.478 "dma_device_type": 2 00:15:56.478 } 00:15:56.479 ], 00:15:56.479 "driver_specific": {} 00:15:56.479 } 00:15:56.479 ] 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.479 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.738 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.738 "name": "Existed_Raid", 00:15:56.738 "uuid": "a0a5c891-b49d-4fce-b970-4cdfaa7d23fc", 00:15:56.738 "strip_size_kb": 0, 00:15:56.738 "state": 
"configuring", 00:15:56.738 "raid_level": "raid1", 00:15:56.738 "superblock": true, 00:15:56.738 "num_base_bdevs": 3, 00:15:56.738 "num_base_bdevs_discovered": 1, 00:15:56.738 "num_base_bdevs_operational": 3, 00:15:56.738 "base_bdevs_list": [ 00:15:56.738 { 00:15:56.738 "name": "BaseBdev1", 00:15:56.738 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:15:56.738 "is_configured": true, 00:15:56.738 "data_offset": 2048, 00:15:56.738 "data_size": 63488 00:15:56.738 }, 00:15:56.738 { 00:15:56.738 "name": "BaseBdev2", 00:15:56.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.738 "is_configured": false, 00:15:56.738 "data_offset": 0, 00:15:56.738 "data_size": 0 00:15:56.738 }, 00:15:56.738 { 00:15:56.738 "name": "BaseBdev3", 00:15:56.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.738 "is_configured": false, 00:15:56.738 "data_offset": 0, 00:15:56.738 "data_size": 0 00:15:56.738 } 00:15:56.738 ] 00:15:56.738 }' 00:15:56.738 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.738 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.305 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.564 [2024-07-25 11:57:43.496288] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.564 [2024-07-25 11:57:43.496322] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1658810 name Existed_Raid, state configuring 00:15:57.564 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:57.822 [2024-07-25 11:57:43.724924] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.822 [2024-07-25 11:57:43.726297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.822 [2024-07-25 11:57:43.726329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.822 [2024-07-25 11:57:43.726338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.822 [2024-07-25 11:57:43.726348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:57.823 11:57:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.823 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.081 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.081 "name": "Existed_Raid", 00:15:58.081 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:15:58.081 "strip_size_kb": 0, 00:15:58.081 "state": "configuring", 00:15:58.081 "raid_level": "raid1", 00:15:58.081 "superblock": true, 00:15:58.081 "num_base_bdevs": 3, 00:15:58.082 "num_base_bdevs_discovered": 1, 00:15:58.082 "num_base_bdevs_operational": 3, 00:15:58.082 "base_bdevs_list": [ 00:15:58.082 { 00:15:58.082 "name": "BaseBdev1", 00:15:58.082 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:15:58.082 "is_configured": true, 00:15:58.082 "data_offset": 2048, 00:15:58.082 "data_size": 63488 00:15:58.082 }, 00:15:58.082 { 00:15:58.082 "name": "BaseBdev2", 00:15:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.082 "is_configured": false, 00:15:58.082 "data_offset": 0, 00:15:58.082 "data_size": 0 00:15:58.082 }, 00:15:58.082 { 00:15:58.082 "name": "BaseBdev3", 00:15:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.082 "is_configured": false, 00:15:58.082 "data_offset": 0, 00:15:58.082 "data_size": 0 00:15:58.082 } 00:15:58.082 ] 00:15:58.082 }' 00:15:58.082 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.082 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.650 11:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:58.650 [2024-07-25 11:57:44.766990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.650 BaseBdev2 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.909 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.909 11:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.168 [ 00:15:59.168 { 00:15:59.168 "name": "BaseBdev2", 00:15:59.168 "aliases": [ 00:15:59.168 "0266adc4-2886-4654-9758-0c546c6666f3" 00:15:59.168 ], 00:15:59.168 "product_name": "Malloc disk", 00:15:59.168 "block_size": 512, 00:15:59.168 "num_blocks": 65536, 00:15:59.168 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:15:59.168 "assigned_rate_limits": { 00:15:59.168 "rw_ios_per_sec": 0, 00:15:59.168 "rw_mbytes_per_sec": 0, 00:15:59.168 "r_mbytes_per_sec": 0, 00:15:59.168 "w_mbytes_per_sec": 0 00:15:59.168 }, 00:15:59.168 "claimed": true, 00:15:59.168 "claim_type": "exclusive_write", 00:15:59.168 "zoned": false, 00:15:59.168 "supported_io_types": { 00:15:59.168 "read": true, 00:15:59.168 "write": true, 00:15:59.168 "unmap": true, 00:15:59.168 "flush": true, 00:15:59.168 "reset": true, 00:15:59.168 "nvme_admin": false, 00:15:59.168 "nvme_io": false, 00:15:59.168 "nvme_io_md": false, 00:15:59.168 "write_zeroes": true, 00:15:59.168 "zcopy": true, 00:15:59.168 "get_zone_info": false, 00:15:59.168 "zone_management": false, 00:15:59.168 "zone_append": false, 00:15:59.168 "compare": false, 00:15:59.168 "compare_and_write": false, 00:15:59.168 "abort": true, 00:15:59.168 "seek_hole": false, 00:15:59.168 "seek_data": false, 00:15:59.168 "copy": true, 00:15:59.168 "nvme_iov_md": false 00:15:59.168 }, 00:15:59.168 "memory_domains": [ 00:15:59.168 { 00:15:59.168 "dma_device_id": "system", 00:15:59.168 "dma_device_type": 1 00:15:59.168 }, 00:15:59.168 { 00:15:59.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.168 "dma_device_type": 2 00:15:59.168 } 00:15:59.168 ], 00:15:59.168 "driver_specific": {} 00:15:59.168 } 00:15:59.168 ] 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.168 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.168 11:57:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.427 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.427 "name": "Existed_Raid", 00:15:59.427 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:15:59.427 "strip_size_kb": 0, 00:15:59.427 "state": "configuring", 00:15:59.427 "raid_level": "raid1", 00:15:59.427 "superblock": true, 00:15:59.427 "num_base_bdevs": 3, 00:15:59.427 "num_base_bdevs_discovered": 2, 00:15:59.427 "num_base_bdevs_operational": 3, 00:15:59.427 "base_bdevs_list": [ 00:15:59.427 { 00:15:59.427 "name": "BaseBdev1", 00:15:59.427 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:15:59.427 "is_configured": true, 00:15:59.427 "data_offset": 2048, 00:15:59.427 "data_size": 63488 00:15:59.427 }, 00:15:59.427 { 00:15:59.427 "name": "BaseBdev2", 00:15:59.427 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:15:59.427 "is_configured": true, 00:15:59.427 "data_offset": 2048, 00:15:59.427 "data_size": 63488 00:15:59.427 }, 00:15:59.427 { 00:15:59.427 "name": "BaseBdev3", 00:15:59.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.427 "is_configured": false, 00:15:59.427 "data_offset": 0, 00:15:59.427 "data_size": 0 00:15:59.427 } 00:15:59.427 ] 00:15:59.427 }' 00:15:59.427 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.427 11:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.995 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:00.255 [2024-07-25 11:57:46.238103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.255 [2024-07-25 11:57:46.238268] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1659700 00:16:00.255 [2024-07-25 11:57:46.238282] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.255 [2024-07-25 11:57:46.238447] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16593d0 00:16:00.255 [2024-07-25 11:57:46.238567] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1659700 00:16:00.255 [2024-07-25 11:57:46.238576] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1659700 00:16:00.255 [2024-07-25 11:57:46.238675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.255 BaseBdev3 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.255 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:16:00.514 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:00.773 [ 00:16:00.773 { 00:16:00.773 "name": "BaseBdev3", 00:16:00.773 "aliases": [ 00:16:00.773 "981e7dd5-17db-4bcc-84a0-969ba1d7353a" 00:16:00.773 ], 00:16:00.773 "product_name": "Malloc disk", 00:16:00.773 "block_size": 512, 00:16:00.773 "num_blocks": 65536, 00:16:00.773 "uuid": "981e7dd5-17db-4bcc-84a0-969ba1d7353a", 00:16:00.773 "assigned_rate_limits": { 00:16:00.773 "rw_ios_per_sec": 0, 00:16:00.773 "rw_mbytes_per_sec": 0, 00:16:00.773 "r_mbytes_per_sec": 0, 00:16:00.773 "w_mbytes_per_sec": 0 00:16:00.773 }, 00:16:00.773 "claimed": true, 00:16:00.773 "claim_type": "exclusive_write", 00:16:00.773 "zoned": false, 00:16:00.773 "supported_io_types": { 00:16:00.773 "read": true, 00:16:00.773 "write": true, 00:16:00.773 "unmap": true, 00:16:00.773 "flush": true, 00:16:00.773 "reset": true, 00:16:00.773 "nvme_admin": false, 00:16:00.773 "nvme_io": false, 00:16:00.773 "nvme_io_md": false, 00:16:00.773 "write_zeroes": true, 00:16:00.773 "zcopy": true, 00:16:00.773 "get_zone_info": false, 00:16:00.773 "zone_management": false, 00:16:00.773 "zone_append": false, 00:16:00.773 "compare": false, 00:16:00.773 "compare_and_write": false, 00:16:00.773 "abort": true, 00:16:00.774 "seek_hole": false, 00:16:00.774 "seek_data": false, 00:16:00.774 "copy": true, 00:16:00.774 "nvme_iov_md": false 00:16:00.774 }, 00:16:00.774 "memory_domains": [ 00:16:00.774 { 00:16:00.774 "dma_device_id": "system", 00:16:00.774 "dma_device_type": 1 00:16:00.774 }, 00:16:00.774 { 00:16:00.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.774 "dma_device_type": 2 00:16:00.774 } 00:16:00.774 ], 00:16:00.774 "driver_specific": {} 00:16:00.774 } 00:16:00.774 ] 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.774 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.033 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.033 "name": "Existed_Raid", 00:16:01.033 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:16:01.033 "strip_size_kb": 0, 00:16:01.033 "state": "online", 00:16:01.033 "raid_level": "raid1", 00:16:01.033 "superblock": true, 00:16:01.033 "num_base_bdevs": 3, 00:16:01.033 "num_base_bdevs_discovered": 3, 00:16:01.033 "num_base_bdevs_operational": 3, 00:16:01.033 "base_bdevs_list": [ 00:16:01.033 { 00:16:01.033 "name": "BaseBdev1", 00:16:01.033 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:16:01.033 "is_configured": true, 00:16:01.033 "data_offset": 2048, 00:16:01.033 "data_size": 63488 00:16:01.033 }, 00:16:01.033 { 00:16:01.033 "name": "BaseBdev2", 00:16:01.033 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:16:01.033 "is_configured": true, 00:16:01.033 "data_offset": 2048, 00:16:01.033 "data_size": 63488 00:16:01.033 }, 00:16:01.033 { 00:16:01.033 "name": "BaseBdev3", 00:16:01.033 "uuid": "981e7dd5-17db-4bcc-84a0-969ba1d7353a", 00:16:01.033 "is_configured": true, 00:16:01.033 "data_offset": 2048, 00:16:01.033 "data_size": 63488 00:16:01.033 } 00:16:01.033 ] 00:16:01.033 }' 00:16:01.033 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.033 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:01.600 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:01.600 [2024-07-25 11:57:47.718303] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.858 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:01.858 "name": "Existed_Raid", 00:16:01.858 "aliases": [ 00:16:01.858 "e599834c-8acf-4d0c-affc-e354b59d7ffe" 00:16:01.858 ], 00:16:01.858 "product_name": "Raid Volume", 00:16:01.858 "block_size": 512, 00:16:01.858 "num_blocks": 63488, 00:16:01.858 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:16:01.858 "assigned_rate_limits": { 00:16:01.858 "rw_ios_per_sec": 0, 00:16:01.858 "rw_mbytes_per_sec": 0, 00:16:01.858 "r_mbytes_per_sec": 0, 00:16:01.858 "w_mbytes_per_sec": 0 00:16:01.858 }, 00:16:01.858 "claimed": false, 00:16:01.858 "zoned": false, 00:16:01.858 "supported_io_types": { 00:16:01.858 "read": true, 00:16:01.858 "write": true, 
00:16:01.858 "unmap": false, 00:16:01.858 "flush": false, 00:16:01.858 "reset": true, 00:16:01.858 "nvme_admin": false, 00:16:01.858 "nvme_io": false, 00:16:01.858 "nvme_io_md": false, 00:16:01.858 "write_zeroes": true, 00:16:01.858 "zcopy": false, 00:16:01.858 "get_zone_info": false, 00:16:01.858 "zone_management": false, 00:16:01.858 "zone_append": false, 00:16:01.858 "compare": false, 00:16:01.858 "compare_and_write": false, 00:16:01.858 "abort": false, 00:16:01.858 "seek_hole": false, 00:16:01.858 "seek_data": false, 00:16:01.858 "copy": false, 00:16:01.858 "nvme_iov_md": false 00:16:01.858 }, 00:16:01.858 "memory_domains": [ 00:16:01.858 { 00:16:01.858 "dma_device_id": "system", 00:16:01.858 "dma_device_type": 1 00:16:01.858 }, 00:16:01.858 { 00:16:01.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.858 "dma_device_type": 2 00:16:01.858 }, 00:16:01.858 { 00:16:01.858 "dma_device_id": "system", 00:16:01.858 "dma_device_type": 1 00:16:01.858 }, 00:16:01.858 { 00:16:01.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.858 "dma_device_type": 2 00:16:01.858 }, 00:16:01.858 { 00:16:01.858 "dma_device_id": "system", 00:16:01.858 "dma_device_type": 1 00:16:01.858 }, 00:16:01.858 { 00:16:01.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.858 "dma_device_type": 2 00:16:01.858 } 00:16:01.858 ], 00:16:01.858 "driver_specific": { 00:16:01.858 "raid": { 00:16:01.858 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:16:01.858 "strip_size_kb": 0, 00:16:01.858 "state": "online", 00:16:01.858 "raid_level": "raid1", 00:16:01.858 "superblock": true, 00:16:01.858 "num_base_bdevs": 3, 00:16:01.858 "num_base_bdevs_discovered": 3, 00:16:01.858 "num_base_bdevs_operational": 3, 00:16:01.858 "base_bdevs_list": [ 00:16:01.858 { 00:16:01.859 "name": "BaseBdev1", 00:16:01.859 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:16:01.859 "is_configured": true, 00:16:01.859 "data_offset": 2048, 00:16:01.859 "data_size": 63488 00:16:01.859 }, 00:16:01.859 { 00:16:01.859 "name": "BaseBdev2", 00:16:01.859 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:16:01.859 "is_configured": true, 00:16:01.859 "data_offset": 2048, 00:16:01.859 "data_size": 63488 00:16:01.859 }, 00:16:01.859 { 00:16:01.859 "name": "BaseBdev3", 00:16:01.859 "uuid": "981e7dd5-17db-4bcc-84a0-969ba1d7353a", 00:16:01.859 "is_configured": true, 00:16:01.859 "data_offset": 2048, 00:16:01.859 "data_size": 63488 00:16:01.859 } 00:16:01.859 ] 00:16:01.859 } 00:16:01.859 } 00:16:01.859 }' 00:16:01.859 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.859 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:01.859 BaseBdev2 00:16:01.859 BaseBdev3' 00:16:01.859 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.859 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:01.859 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.117 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.117 "name": "BaseBdev1", 00:16:02.117 "aliases": [ 00:16:02.117 "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4" 00:16:02.117 ], 00:16:02.117 "product_name": "Malloc disk", 00:16:02.117 
"block_size": 512, 00:16:02.117 "num_blocks": 65536, 00:16:02.117 "uuid": "8ed9a925-734c-4d21-8b4e-d1cc89fc63b4", 00:16:02.117 "assigned_rate_limits": { 00:16:02.117 "rw_ios_per_sec": 0, 00:16:02.117 "rw_mbytes_per_sec": 0, 00:16:02.117 "r_mbytes_per_sec": 0, 00:16:02.117 "w_mbytes_per_sec": 0 00:16:02.117 }, 00:16:02.117 "claimed": true, 00:16:02.117 "claim_type": "exclusive_write", 00:16:02.117 "zoned": false, 00:16:02.117 "supported_io_types": { 00:16:02.117 "read": true, 00:16:02.117 "write": true, 00:16:02.117 "unmap": true, 00:16:02.117 "flush": true, 00:16:02.117 "reset": true, 00:16:02.117 "nvme_admin": false, 00:16:02.117 "nvme_io": false, 00:16:02.117 "nvme_io_md": false, 00:16:02.117 "write_zeroes": true, 00:16:02.117 "zcopy": true, 00:16:02.117 "get_zone_info": false, 00:16:02.117 "zone_management": false, 00:16:02.117 "zone_append": false, 00:16:02.117 "compare": false, 00:16:02.117 "compare_and_write": false, 00:16:02.117 "abort": true, 00:16:02.117 "seek_hole": false, 00:16:02.117 "seek_data": false, 00:16:02.117 "copy": true, 00:16:02.117 "nvme_iov_md": false 00:16:02.117 }, 00:16:02.117 "memory_domains": [ 00:16:02.117 { 00:16:02.117 "dma_device_id": "system", 00:16:02.117 "dma_device_type": 1 00:16:02.117 }, 00:16:02.117 { 00:16:02.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.117 "dma_device_type": 2 00:16:02.117 } 00:16:02.117 ], 00:16:02.117 "driver_specific": {} 00:16:02.117 }' 00:16:02.117 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.118 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:02.376 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.652 "name": "BaseBdev2", 00:16:02.652 "aliases": [ 00:16:02.652 "0266adc4-2886-4654-9758-0c546c6666f3" 00:16:02.652 ], 00:16:02.652 "product_name": "Malloc disk", 00:16:02.652 "block_size": 512, 00:16:02.652 "num_blocks": 65536, 00:16:02.652 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:16:02.652 
"assigned_rate_limits": { 00:16:02.652 "rw_ios_per_sec": 0, 00:16:02.652 "rw_mbytes_per_sec": 0, 00:16:02.652 "r_mbytes_per_sec": 0, 00:16:02.652 "w_mbytes_per_sec": 0 00:16:02.652 }, 00:16:02.652 "claimed": true, 00:16:02.652 "claim_type": "exclusive_write", 00:16:02.652 "zoned": false, 00:16:02.652 "supported_io_types": { 00:16:02.652 "read": true, 00:16:02.652 "write": true, 00:16:02.652 "unmap": true, 00:16:02.652 "flush": true, 00:16:02.652 "reset": true, 00:16:02.652 "nvme_admin": false, 00:16:02.652 "nvme_io": false, 00:16:02.652 "nvme_io_md": false, 00:16:02.652 "write_zeroes": true, 00:16:02.652 "zcopy": true, 00:16:02.652 "get_zone_info": false, 00:16:02.652 "zone_management": false, 00:16:02.652 "zone_append": false, 00:16:02.652 "compare": false, 00:16:02.652 "compare_and_write": false, 00:16:02.652 "abort": true, 00:16:02.652 "seek_hole": false, 00:16:02.652 "seek_data": false, 00:16:02.652 "copy": true, 00:16:02.652 "nvme_iov_md": false 00:16:02.652 }, 00:16:02.652 "memory_domains": [ 00:16:02.652 { 00:16:02.652 "dma_device_id": "system", 00:16:02.652 "dma_device_type": 1 00:16:02.652 }, 00:16:02.652 { 00:16:02.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.652 "dma_device_type": 2 00:16:02.652 } 00:16:02.652 ], 00:16:02.652 "driver_specific": {} 00:16:02.652 }' 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.652 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.920 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:02.921 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.179 "name": "BaseBdev3", 00:16:03.179 "aliases": [ 00:16:03.179 "981e7dd5-17db-4bcc-84a0-969ba1d7353a" 00:16:03.179 ], 00:16:03.179 "product_name": "Malloc disk", 00:16:03.179 "block_size": 512, 00:16:03.179 "num_blocks": 65536, 00:16:03.179 "uuid": "981e7dd5-17db-4bcc-84a0-969ba1d7353a", 00:16:03.179 "assigned_rate_limits": { 00:16:03.179 "rw_ios_per_sec": 0, 00:16:03.179 "rw_mbytes_per_sec": 0, 00:16:03.179 "r_mbytes_per_sec": 0, 
00:16:03.179 "w_mbytes_per_sec": 0 00:16:03.179 }, 00:16:03.179 "claimed": true, 00:16:03.179 "claim_type": "exclusive_write", 00:16:03.179 "zoned": false, 00:16:03.179 "supported_io_types": { 00:16:03.179 "read": true, 00:16:03.179 "write": true, 00:16:03.179 "unmap": true, 00:16:03.179 "flush": true, 00:16:03.179 "reset": true, 00:16:03.179 "nvme_admin": false, 00:16:03.179 "nvme_io": false, 00:16:03.179 "nvme_io_md": false, 00:16:03.179 "write_zeroes": true, 00:16:03.179 "zcopy": true, 00:16:03.179 "get_zone_info": false, 00:16:03.179 "zone_management": false, 00:16:03.179 "zone_append": false, 00:16:03.179 "compare": false, 00:16:03.179 "compare_and_write": false, 00:16:03.179 "abort": true, 00:16:03.179 "seek_hole": false, 00:16:03.179 "seek_data": false, 00:16:03.179 "copy": true, 00:16:03.179 "nvme_iov_md": false 00:16:03.179 }, 00:16:03.179 "memory_domains": [ 00:16:03.179 { 00:16:03.179 "dma_device_id": "system", 00:16:03.179 "dma_device_type": 1 00:16:03.179 }, 00:16:03.179 { 00:16:03.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.179 "dma_device_type": 2 00:16:03.179 } 00:16:03.179 ], 00:16:03.179 "driver_specific": {} 00:16:03.179 }' 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.179 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.437 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.438 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.438 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.696 [2024-07-25 11:57:49.711310] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.696 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.955 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.955 "name": "Existed_Raid", 00:16:03.955 "uuid": "e599834c-8acf-4d0c-affc-e354b59d7ffe", 00:16:03.955 "strip_size_kb": 0, 00:16:03.955 "state": "online", 00:16:03.955 "raid_level": "raid1", 00:16:03.955 "superblock": true, 00:16:03.955 "num_base_bdevs": 3, 00:16:03.955 "num_base_bdevs_discovered": 2, 00:16:03.955 "num_base_bdevs_operational": 2, 00:16:03.955 "base_bdevs_list": [ 00:16:03.955 { 00:16:03.955 "name": null, 00:16:03.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.955 "is_configured": false, 00:16:03.955 "data_offset": 2048, 00:16:03.955 "data_size": 63488 00:16:03.955 }, 00:16:03.955 { 00:16:03.955 "name": "BaseBdev2", 00:16:03.955 "uuid": "0266adc4-2886-4654-9758-0c546c6666f3", 00:16:03.955 "is_configured": true, 00:16:03.955 "data_offset": 2048, 00:16:03.955 "data_size": 63488 00:16:03.955 }, 00:16:03.955 { 00:16:03.955 "name": "BaseBdev3", 00:16:03.955 "uuid": "981e7dd5-17db-4bcc-84a0-969ba1d7353a", 00:16:03.955 "is_configured": true, 00:16:03.955 "data_offset": 2048, 00:16:03.955 "data_size": 63488 00:16:03.955 } 00:16:03.955 ] 00:16:03.955 }' 00:16:03.955 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.955 11:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.521 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:04.521 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.521 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.521 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:04.780 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:04.780 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.780 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.038 [2024-07-25 11:57:50.919469] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.038 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:05.038 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:05.038 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.038 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:05.297 [2024-07-25 11:57:51.382630] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.297 [2024-07-25 11:57:51.382707] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.297 [2024-07-25 11:57:51.392866] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.297 [2024-07-25 11:57:51.392897] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.297 [2024-07-25 11:57:51.392908] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1659700 name Existed_Raid, state offline 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:05.297 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:05.555 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.814 BaseBdev2 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:05.814 
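With the array torn down (deleting the remaining base bdevs above took it offline and freed it), the trace now rebuilds the setup in the opposite order: BaseBdev2 and BaseBdev3 are created first, and the raid is created afterwards while BaseBdev1 is still missing, so the existing bdevs are claimed at create time and the array sits in "configuring" with two of three discovered. A sketch of that ordering, using the same hypothetical rpc() shorthand; the trailing jq filter is only an illustrative check (the test itself goes through verify_raid_bdev_state):

rpc() { "${SPDK_DIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }   # local shorthand, not an SPDK helper

# Base bdevs first, raid second: present bdevs are claimed immediately and the
# array stays "configuring" until the last named base bdev (BaseBdev1) exists.
rpc bdev_malloc_create 32 512 -b BaseBdev2
rpc bdev_malloc_create 32 512 -b BaseBdev3
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'   # expect: configuring 2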
11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:05.814 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.072 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.331 [ 00:16:06.331 { 00:16:06.331 "name": "BaseBdev2", 00:16:06.331 "aliases": [ 00:16:06.331 "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc" 00:16:06.331 ], 00:16:06.331 "product_name": "Malloc disk", 00:16:06.331 "block_size": 512, 00:16:06.331 "num_blocks": 65536, 00:16:06.331 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:06.331 "assigned_rate_limits": { 00:16:06.331 "rw_ios_per_sec": 0, 00:16:06.331 "rw_mbytes_per_sec": 0, 00:16:06.331 "r_mbytes_per_sec": 0, 00:16:06.331 "w_mbytes_per_sec": 0 00:16:06.331 }, 00:16:06.331 "claimed": false, 00:16:06.331 "zoned": false, 00:16:06.331 "supported_io_types": { 00:16:06.331 "read": true, 00:16:06.331 "write": true, 00:16:06.331 "unmap": true, 00:16:06.331 "flush": true, 00:16:06.331 "reset": true, 00:16:06.331 "nvme_admin": false, 00:16:06.331 "nvme_io": false, 00:16:06.331 "nvme_io_md": false, 00:16:06.331 "write_zeroes": true, 00:16:06.331 "zcopy": true, 00:16:06.331 "get_zone_info": false, 00:16:06.331 "zone_management": false, 00:16:06.331 "zone_append": false, 00:16:06.331 "compare": false, 00:16:06.331 "compare_and_write": false, 00:16:06.331 "abort": true, 00:16:06.331 "seek_hole": false, 00:16:06.331 "seek_data": false, 00:16:06.331 "copy": true, 00:16:06.331 "nvme_iov_md": false 00:16:06.331 }, 00:16:06.331 "memory_domains": [ 00:16:06.331 { 00:16:06.331 "dma_device_id": "system", 00:16:06.331 "dma_device_type": 1 00:16:06.331 }, 00:16:06.331 { 00:16:06.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.331 "dma_device_type": 2 00:16:06.331 } 00:16:06.331 ], 00:16:06.331 "driver_specific": {} 00:16:06.331 } 00:16:06.331 ] 00:16:06.331 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.331 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:06.331 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:06.331 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:06.589 BaseBdev3 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.589 11:57:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.589 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.847 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:07.105 [ 00:16:07.105 { 00:16:07.105 "name": "BaseBdev3", 00:16:07.105 "aliases": [ 00:16:07.105 "df8f98ba-5a69-4ad2-aadf-b008a259c0ca" 00:16:07.106 ], 00:16:07.106 "product_name": "Malloc disk", 00:16:07.106 "block_size": 512, 00:16:07.106 "num_blocks": 65536, 00:16:07.106 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:07.106 "assigned_rate_limits": { 00:16:07.106 "rw_ios_per_sec": 0, 00:16:07.106 "rw_mbytes_per_sec": 0, 00:16:07.106 "r_mbytes_per_sec": 0, 00:16:07.106 "w_mbytes_per_sec": 0 00:16:07.106 }, 00:16:07.106 "claimed": false, 00:16:07.106 "zoned": false, 00:16:07.106 "supported_io_types": { 00:16:07.106 "read": true, 00:16:07.106 "write": true, 00:16:07.106 "unmap": true, 00:16:07.106 "flush": true, 00:16:07.106 "reset": true, 00:16:07.106 "nvme_admin": false, 00:16:07.106 "nvme_io": false, 00:16:07.106 "nvme_io_md": false, 00:16:07.106 "write_zeroes": true, 00:16:07.106 "zcopy": true, 00:16:07.106 "get_zone_info": false, 00:16:07.106 "zone_management": false, 00:16:07.106 "zone_append": false, 00:16:07.106 "compare": false, 00:16:07.106 "compare_and_write": false, 00:16:07.106 "abort": true, 00:16:07.106 "seek_hole": false, 00:16:07.106 "seek_data": false, 00:16:07.106 "copy": true, 00:16:07.106 "nvme_iov_md": false 00:16:07.106 }, 00:16:07.106 "memory_domains": [ 00:16:07.106 { 00:16:07.106 "dma_device_id": "system", 00:16:07.106 "dma_device_type": 1 00:16:07.106 }, 00:16:07.106 { 00:16:07.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.106 "dma_device_type": 2 00:16:07.106 } 00:16:07.106 ], 00:16:07.106 "driver_specific": {} 00:16:07.106 } 00:16:07.106 ] 00:16:07.106 11:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:07.106 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:07.106 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:07.106 11:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:07.106 [2024-07-25 11:57:53.197872] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.106 [2024-07-25 11:57:53.197911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.106 [2024-07-25 11:57:53.197927] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.106 [2024-07-25 11:57:53.199176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.106 11:57:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.106 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.365 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.365 "name": "Existed_Raid", 00:16:07.365 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:07.365 "strip_size_kb": 0, 00:16:07.365 "state": "configuring", 00:16:07.365 "raid_level": "raid1", 00:16:07.365 "superblock": true, 00:16:07.365 "num_base_bdevs": 3, 00:16:07.365 "num_base_bdevs_discovered": 2, 00:16:07.365 "num_base_bdevs_operational": 3, 00:16:07.365 "base_bdevs_list": [ 00:16:07.365 { 00:16:07.365 "name": "BaseBdev1", 00:16:07.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.365 "is_configured": false, 00:16:07.365 "data_offset": 0, 00:16:07.365 "data_size": 0 00:16:07.365 }, 00:16:07.365 { 00:16:07.366 "name": "BaseBdev2", 00:16:07.366 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:07.366 "is_configured": true, 00:16:07.366 "data_offset": 2048, 00:16:07.366 "data_size": 63488 00:16:07.366 }, 00:16:07.366 { 00:16:07.366 "name": "BaseBdev3", 00:16:07.366 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:07.366 "is_configured": true, 00:16:07.366 "data_offset": 2048, 00:16:07.366 "data_size": 63488 00:16:07.366 } 00:16:07.366 ] 00:16:07.366 }' 00:16:07.366 11:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.366 11:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.931 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:08.190 [2024-07-25 11:57:54.220545] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.190 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.499 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.499 "name": "Existed_Raid", 00:16:08.499 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:08.499 "strip_size_kb": 0, 00:16:08.499 "state": "configuring", 00:16:08.499 "raid_level": "raid1", 00:16:08.499 "superblock": true, 00:16:08.499 "num_base_bdevs": 3, 00:16:08.499 "num_base_bdevs_discovered": 1, 00:16:08.499 "num_base_bdevs_operational": 3, 00:16:08.499 "base_bdevs_list": [ 00:16:08.499 { 00:16:08.499 "name": "BaseBdev1", 00:16:08.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.499 "is_configured": false, 00:16:08.499 "data_offset": 0, 00:16:08.499 "data_size": 0 00:16:08.499 }, 00:16:08.499 { 00:16:08.499 "name": null, 00:16:08.499 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:08.499 "is_configured": false, 00:16:08.499 "data_offset": 2048, 00:16:08.499 "data_size": 63488 00:16:08.499 }, 00:16:08.499 { 00:16:08.499 "name": "BaseBdev3", 00:16:08.499 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:08.499 "is_configured": true, 00:16:08.499 "data_offset": 2048, 00:16:08.499 "data_size": 63488 00:16:08.499 } 00:16:08.499 ] 00:16:08.499 }' 00:16:08.499 11:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.499 11:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.065 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.065 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.323 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:09.323 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.582 [2024-07-25 11:57:55.470949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.582 BaseBdev1 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
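The waitforbdev / verify_raid_bdev_state steps around this point reduce to a small rpc.py-plus-jq pattern. A minimal sketch of that pattern follows, using only the rpc.py subcommands, socket path and jq filters visible in this log; the shell variables are illustrative and this is not the literal body of the helpers in bdev_raid.sh or autotest_common.sh.

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Wait until the freshly created BaseBdev1 malloc bdev is visible
# (the 2000 ms timeout mirrors the -t 2000 used in this log).
$rpc -s $sock bdev_wait_for_examine
$rpc -s $sock bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null

# Pull the raid bdev's descriptor and check the fields the test asserts on.
info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
echo "$info" | jq -r '.state'                          # "configuring" while base bdevs are missing
echo "$info" | jq '.num_base_bdevs_discovered'         # 1 here, after BaseBdev2 was removed
echo "$info" | jq '.base_bdevs_list[1].is_configured'  # false for the removed slot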
00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.582 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.840 [ 00:16:09.840 { 00:16:09.840 "name": "BaseBdev1", 00:16:09.840 "aliases": [ 00:16:09.840 "97d61be9-d177-45e1-82fa-e3f82f5087a5" 00:16:09.840 ], 00:16:09.840 "product_name": "Malloc disk", 00:16:09.840 "block_size": 512, 00:16:09.840 "num_blocks": 65536, 00:16:09.840 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:09.840 "assigned_rate_limits": { 00:16:09.840 "rw_ios_per_sec": 0, 00:16:09.840 "rw_mbytes_per_sec": 0, 00:16:09.840 "r_mbytes_per_sec": 0, 00:16:09.840 "w_mbytes_per_sec": 0 00:16:09.840 }, 00:16:09.840 "claimed": true, 00:16:09.840 "claim_type": "exclusive_write", 00:16:09.840 "zoned": false, 00:16:09.840 "supported_io_types": { 00:16:09.840 "read": true, 00:16:09.840 "write": true, 00:16:09.840 "unmap": true, 00:16:09.840 "flush": true, 00:16:09.840 "reset": true, 00:16:09.840 "nvme_admin": false, 00:16:09.840 "nvme_io": false, 00:16:09.840 "nvme_io_md": false, 00:16:09.840 "write_zeroes": true, 00:16:09.840 "zcopy": true, 00:16:09.840 "get_zone_info": false, 00:16:09.840 "zone_management": false, 00:16:09.840 "zone_append": false, 00:16:09.840 "compare": false, 00:16:09.840 "compare_and_write": false, 00:16:09.840 "abort": true, 00:16:09.840 "seek_hole": false, 00:16:09.840 "seek_data": false, 00:16:09.840 "copy": true, 00:16:09.840 "nvme_iov_md": false 00:16:09.840 }, 00:16:09.840 "memory_domains": [ 00:16:09.840 { 00:16:09.840 "dma_device_id": "system", 00:16:09.840 "dma_device_type": 1 00:16:09.840 }, 00:16:09.840 { 00:16:09.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.840 "dma_device_type": 2 00:16:09.840 } 00:16:09.840 ], 00:16:09.840 "driver_specific": {} 00:16:09.840 } 00:16:09.840 ] 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.840 11:57:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.840 11:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.098 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.098 "name": "Existed_Raid", 00:16:10.098 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:10.098 "strip_size_kb": 0, 00:16:10.098 "state": "configuring", 00:16:10.098 "raid_level": "raid1", 00:16:10.098 "superblock": true, 00:16:10.098 "num_base_bdevs": 3, 00:16:10.098 "num_base_bdevs_discovered": 2, 00:16:10.098 "num_base_bdevs_operational": 3, 00:16:10.098 "base_bdevs_list": [ 00:16:10.098 { 00:16:10.098 "name": "BaseBdev1", 00:16:10.098 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:10.098 "is_configured": true, 00:16:10.098 "data_offset": 2048, 00:16:10.098 "data_size": 63488 00:16:10.098 }, 00:16:10.099 { 00:16:10.099 "name": null, 00:16:10.099 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:10.099 "is_configured": false, 00:16:10.099 "data_offset": 2048, 00:16:10.099 "data_size": 63488 00:16:10.099 }, 00:16:10.099 { 00:16:10.099 "name": "BaseBdev3", 00:16:10.099 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:10.099 "is_configured": true, 00:16:10.099 "data_offset": 2048, 00:16:10.099 "data_size": 63488 00:16:10.099 } 00:16:10.099 ] 00:16:10.099 }' 00:16:10.099 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.099 11:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.665 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.665 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:10.924 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:10.924 11:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:11.182 [2024-07-25 11:57:57.171441] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.182 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.441 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.441 "name": "Existed_Raid", 00:16:11.441 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:11.441 "strip_size_kb": 0, 00:16:11.441 "state": "configuring", 00:16:11.441 "raid_level": "raid1", 00:16:11.441 "superblock": true, 00:16:11.441 "num_base_bdevs": 3, 00:16:11.441 "num_base_bdevs_discovered": 1, 00:16:11.441 "num_base_bdevs_operational": 3, 00:16:11.441 "base_bdevs_list": [ 00:16:11.441 { 00:16:11.441 "name": "BaseBdev1", 00:16:11.441 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:11.441 "is_configured": true, 00:16:11.441 "data_offset": 2048, 00:16:11.441 "data_size": 63488 00:16:11.441 }, 00:16:11.441 { 00:16:11.441 "name": null, 00:16:11.441 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:11.441 "is_configured": false, 00:16:11.441 "data_offset": 2048, 00:16:11.441 "data_size": 63488 00:16:11.441 }, 00:16:11.441 { 00:16:11.441 "name": null, 00:16:11.441 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:11.441 "is_configured": false, 00:16:11.441 "data_offset": 2048, 00:16:11.441 "data_size": 63488 00:16:11.441 } 00:16:11.441 ] 00:16:11.441 }' 00:16:11.441 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.441 11:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.007 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.007 11:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.007 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:12.007 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:12.266 [2024-07-25 11:57:58.290396] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.266 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.524 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.524 "name": "Existed_Raid", 00:16:12.524 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:12.524 "strip_size_kb": 0, 00:16:12.524 "state": "configuring", 00:16:12.524 "raid_level": "raid1", 00:16:12.524 "superblock": true, 00:16:12.524 "num_base_bdevs": 3, 00:16:12.524 "num_base_bdevs_discovered": 2, 00:16:12.524 "num_base_bdevs_operational": 3, 00:16:12.524 "base_bdevs_list": [ 00:16:12.524 { 00:16:12.524 "name": "BaseBdev1", 00:16:12.524 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:12.524 "is_configured": true, 00:16:12.524 "data_offset": 2048, 00:16:12.524 "data_size": 63488 00:16:12.524 }, 00:16:12.524 { 00:16:12.524 "name": null, 00:16:12.524 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:12.524 "is_configured": false, 00:16:12.524 "data_offset": 2048, 00:16:12.524 "data_size": 63488 00:16:12.524 }, 00:16:12.524 { 00:16:12.524 "name": "BaseBdev3", 00:16:12.524 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:12.524 "is_configured": true, 00:16:12.524 "data_offset": 2048, 00:16:12.524 "data_size": 63488 00:16:12.524 } 00:16:12.524 ] 00:16:12.524 }' 00:16:12.524 11:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.524 11:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.091 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.091 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:13.349 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:13.349 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.608 [2024-07-25 11:57:59.497575] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.608 11:57:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.608 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.866 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.866 "name": "Existed_Raid", 00:16:13.866 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:13.866 "strip_size_kb": 0, 00:16:13.866 "state": "configuring", 00:16:13.866 "raid_level": "raid1", 00:16:13.866 "superblock": true, 00:16:13.866 "num_base_bdevs": 3, 00:16:13.866 "num_base_bdevs_discovered": 1, 00:16:13.866 "num_base_bdevs_operational": 3, 00:16:13.866 "base_bdevs_list": [ 00:16:13.866 { 00:16:13.866 "name": null, 00:16:13.866 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:13.866 "is_configured": false, 00:16:13.866 "data_offset": 2048, 00:16:13.866 "data_size": 63488 00:16:13.866 }, 00:16:13.866 { 00:16:13.866 "name": null, 00:16:13.866 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:13.866 "is_configured": false, 00:16:13.866 "data_offset": 2048, 00:16:13.866 "data_size": 63488 00:16:13.866 }, 00:16:13.866 { 00:16:13.866 "name": "BaseBdev3", 00:16:13.866 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:13.866 "is_configured": true, 00:16:13.866 "data_offset": 2048, 00:16:13.866 "data_size": 63488 00:16:13.866 } 00:16:13.866 ] 00:16:13.866 }' 00:16:13.866 11:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.866 11:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.431 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.431 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:14.431 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:14.431 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:14.688 [2024-07-25 11:58:00.742961] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.688 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.945 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.945 "name": "Existed_Raid", 00:16:14.945 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:14.945 "strip_size_kb": 0, 00:16:14.945 "state": "configuring", 00:16:14.945 "raid_level": "raid1", 00:16:14.945 "superblock": true, 00:16:14.945 "num_base_bdevs": 3, 00:16:14.945 "num_base_bdevs_discovered": 2, 00:16:14.945 "num_base_bdevs_operational": 3, 00:16:14.945 "base_bdevs_list": [ 00:16:14.945 { 00:16:14.945 "name": null, 00:16:14.945 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:14.945 "is_configured": false, 00:16:14.945 "data_offset": 2048, 00:16:14.945 "data_size": 63488 00:16:14.945 }, 00:16:14.945 { 00:16:14.945 "name": "BaseBdev2", 00:16:14.945 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:14.945 "is_configured": true, 00:16:14.945 "data_offset": 2048, 00:16:14.945 "data_size": 63488 00:16:14.945 }, 00:16:14.945 { 00:16:14.945 "name": "BaseBdev3", 00:16:14.945 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:14.945 "is_configured": true, 00:16:14.945 "data_offset": 2048, 00:16:14.945 "data_size": 63488 00:16:14.945 } 00:16:14.945 ] 00:16:14.945 }' 00:16:14.945 11:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.945 11:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.512 11:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.512 11:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.791 11:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:15.791 11:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.791 11:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:16.048 11:58:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 97d61be9-d177-45e1-82fa-e3f82f5087a5 00:16:16.048 [2024-07-25 11:58:02.157777] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:16.048 [2024-07-25 11:58:02.157913] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1807080 00:16:16.048 [2024-07-25 11:58:02.157925] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.048 [2024-07-25 11:58:02.158088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17fd760 00:16:16.048 [2024-07-25 11:58:02.158207] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1807080 00:16:16.048 [2024-07-25 11:58:02.158217] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1807080 00:16:16.048 [2024-07-25 11:58:02.158301] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.048 NewBaseBdev 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.307 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:16.565 [ 00:16:16.565 { 00:16:16.565 "name": "NewBaseBdev", 00:16:16.565 "aliases": [ 00:16:16.565 "97d61be9-d177-45e1-82fa-e3f82f5087a5" 00:16:16.565 ], 00:16:16.565 "product_name": "Malloc disk", 00:16:16.565 "block_size": 512, 00:16:16.565 "num_blocks": 65536, 00:16:16.565 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:16.565 "assigned_rate_limits": { 00:16:16.565 "rw_ios_per_sec": 0, 00:16:16.565 "rw_mbytes_per_sec": 0, 00:16:16.565 "r_mbytes_per_sec": 0, 00:16:16.565 "w_mbytes_per_sec": 0 00:16:16.565 }, 00:16:16.565 "claimed": true, 00:16:16.565 "claim_type": "exclusive_write", 00:16:16.565 "zoned": false, 00:16:16.565 "supported_io_types": { 00:16:16.565 "read": true, 00:16:16.565 "write": true, 00:16:16.565 "unmap": true, 00:16:16.565 "flush": true, 00:16:16.565 "reset": true, 00:16:16.565 "nvme_admin": false, 00:16:16.565 "nvme_io": false, 00:16:16.565 "nvme_io_md": false, 00:16:16.565 "write_zeroes": true, 00:16:16.565 "zcopy": true, 00:16:16.565 "get_zone_info": false, 00:16:16.565 "zone_management": false, 00:16:16.565 "zone_append": false, 00:16:16.565 "compare": false, 00:16:16.565 "compare_and_write": false, 00:16:16.565 "abort": true, 00:16:16.565 "seek_hole": false, 00:16:16.565 "seek_data": false, 00:16:16.565 "copy": true, 
00:16:16.565 "nvme_iov_md": false 00:16:16.565 }, 00:16:16.565 "memory_domains": [ 00:16:16.565 { 00:16:16.565 "dma_device_id": "system", 00:16:16.565 "dma_device_type": 1 00:16:16.565 }, 00:16:16.565 { 00:16:16.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.565 "dma_device_type": 2 00:16:16.565 } 00:16:16.565 ], 00:16:16.565 "driver_specific": {} 00:16:16.565 } 00:16:16.565 ] 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.565 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.823 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.823 "name": "Existed_Raid", 00:16:16.823 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:16.823 "strip_size_kb": 0, 00:16:16.823 "state": "online", 00:16:16.823 "raid_level": "raid1", 00:16:16.823 "superblock": true, 00:16:16.823 "num_base_bdevs": 3, 00:16:16.823 "num_base_bdevs_discovered": 3, 00:16:16.823 "num_base_bdevs_operational": 3, 00:16:16.823 "base_bdevs_list": [ 00:16:16.823 { 00:16:16.823 "name": "NewBaseBdev", 00:16:16.823 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:16.823 "is_configured": true, 00:16:16.823 "data_offset": 2048, 00:16:16.823 "data_size": 63488 00:16:16.823 }, 00:16:16.823 { 00:16:16.823 "name": "BaseBdev2", 00:16:16.823 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:16.823 "is_configured": true, 00:16:16.823 "data_offset": 2048, 00:16:16.823 "data_size": 63488 00:16:16.823 }, 00:16:16.823 { 00:16:16.823 "name": "BaseBdev3", 00:16:16.823 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:16.823 "is_configured": true, 00:16:16.823 "data_offset": 2048, 00:16:16.823 "data_size": 63488 00:16:16.823 } 00:16:16.823 ] 00:16:16.823 }' 00:16:16.823 11:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.823 11:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties 
Existed_Raid 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:17.400 [2024-07-25 11:58:03.493591] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.400 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:17.400 "name": "Existed_Raid", 00:16:17.400 "aliases": [ 00:16:17.400 "3a4372e0-51fb-4ef4-9d17-7981a58b13fa" 00:16:17.400 ], 00:16:17.400 "product_name": "Raid Volume", 00:16:17.400 "block_size": 512, 00:16:17.400 "num_blocks": 63488, 00:16:17.400 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:17.400 "assigned_rate_limits": { 00:16:17.400 "rw_ios_per_sec": 0, 00:16:17.400 "rw_mbytes_per_sec": 0, 00:16:17.400 "r_mbytes_per_sec": 0, 00:16:17.400 "w_mbytes_per_sec": 0 00:16:17.400 }, 00:16:17.400 "claimed": false, 00:16:17.400 "zoned": false, 00:16:17.400 "supported_io_types": { 00:16:17.400 "read": true, 00:16:17.400 "write": true, 00:16:17.400 "unmap": false, 00:16:17.400 "flush": false, 00:16:17.400 "reset": true, 00:16:17.400 "nvme_admin": false, 00:16:17.400 "nvme_io": false, 00:16:17.400 "nvme_io_md": false, 00:16:17.400 "write_zeroes": true, 00:16:17.400 "zcopy": false, 00:16:17.400 "get_zone_info": false, 00:16:17.400 "zone_management": false, 00:16:17.400 "zone_append": false, 00:16:17.400 "compare": false, 00:16:17.400 "compare_and_write": false, 00:16:17.400 "abort": false, 00:16:17.400 "seek_hole": false, 00:16:17.400 "seek_data": false, 00:16:17.400 "copy": false, 00:16:17.400 "nvme_iov_md": false 00:16:17.400 }, 00:16:17.400 "memory_domains": [ 00:16:17.400 { 00:16:17.400 "dma_device_id": "system", 00:16:17.400 "dma_device_type": 1 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.400 "dma_device_type": 2 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "dma_device_id": "system", 00:16:17.400 "dma_device_type": 1 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.400 "dma_device_type": 2 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "dma_device_id": "system", 00:16:17.400 "dma_device_type": 1 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.400 "dma_device_type": 2 00:16:17.400 } 00:16:17.400 ], 00:16:17.400 "driver_specific": { 00:16:17.400 "raid": { 00:16:17.400 "uuid": "3a4372e0-51fb-4ef4-9d17-7981a58b13fa", 00:16:17.400 "strip_size_kb": 0, 00:16:17.400 "state": "online", 00:16:17.400 "raid_level": "raid1", 00:16:17.400 "superblock": true, 00:16:17.400 "num_base_bdevs": 3, 00:16:17.400 "num_base_bdevs_discovered": 3, 00:16:17.400 "num_base_bdevs_operational": 3, 00:16:17.400 "base_bdevs_list": [ 00:16:17.400 { 00:16:17.400 "name": "NewBaseBdev", 
00:16:17.400 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:17.400 "is_configured": true, 00:16:17.400 "data_offset": 2048, 00:16:17.400 "data_size": 63488 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "name": "BaseBdev2", 00:16:17.400 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:17.400 "is_configured": true, 00:16:17.400 "data_offset": 2048, 00:16:17.400 "data_size": 63488 00:16:17.400 }, 00:16:17.400 { 00:16:17.400 "name": "BaseBdev3", 00:16:17.400 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:17.400 "is_configured": true, 00:16:17.400 "data_offset": 2048, 00:16:17.400 "data_size": 63488 00:16:17.400 } 00:16:17.400 ] 00:16:17.400 } 00:16:17.400 } 00:16:17.400 }' 00:16:17.661 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.661 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:17.661 BaseBdev2 00:16:17.661 BaseBdev3' 00:16:17.661 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.661 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:17.661 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.919 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.919 "name": "NewBaseBdev", 00:16:17.919 "aliases": [ 00:16:17.919 "97d61be9-d177-45e1-82fa-e3f82f5087a5" 00:16:17.919 ], 00:16:17.919 "product_name": "Malloc disk", 00:16:17.919 "block_size": 512, 00:16:17.919 "num_blocks": 65536, 00:16:17.919 "uuid": "97d61be9-d177-45e1-82fa-e3f82f5087a5", 00:16:17.919 "assigned_rate_limits": { 00:16:17.919 "rw_ios_per_sec": 0, 00:16:17.919 "rw_mbytes_per_sec": 0, 00:16:17.919 "r_mbytes_per_sec": 0, 00:16:17.919 "w_mbytes_per_sec": 0 00:16:17.919 }, 00:16:17.919 "claimed": true, 00:16:17.919 "claim_type": "exclusive_write", 00:16:17.919 "zoned": false, 00:16:17.919 "supported_io_types": { 00:16:17.919 "read": true, 00:16:17.919 "write": true, 00:16:17.919 "unmap": true, 00:16:17.919 "flush": true, 00:16:17.919 "reset": true, 00:16:17.919 "nvme_admin": false, 00:16:17.919 "nvme_io": false, 00:16:17.919 "nvme_io_md": false, 00:16:17.919 "write_zeroes": true, 00:16:17.919 "zcopy": true, 00:16:17.919 "get_zone_info": false, 00:16:17.920 "zone_management": false, 00:16:17.920 "zone_append": false, 00:16:17.920 "compare": false, 00:16:17.920 "compare_and_write": false, 00:16:17.920 "abort": true, 00:16:17.920 "seek_hole": false, 00:16:17.920 "seek_data": false, 00:16:17.920 "copy": true, 00:16:17.920 "nvme_iov_md": false 00:16:17.920 }, 00:16:17.920 "memory_domains": [ 00:16:17.920 { 00:16:17.920 "dma_device_id": "system", 00:16:17.920 "dma_device_type": 1 00:16:17.920 }, 00:16:17.920 { 00:16:17.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.920 "dma_device_type": 2 00:16:17.920 } 00:16:17.920 ], 00:16:17.920 "driver_specific": {} 00:16:17.920 }' 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:17.920 11:58:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.920 11:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.920 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:18.177 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.435 "name": "BaseBdev2", 00:16:18.435 "aliases": [ 00:16:18.435 "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc" 00:16:18.435 ], 00:16:18.435 "product_name": "Malloc disk", 00:16:18.435 "block_size": 512, 00:16:18.435 "num_blocks": 65536, 00:16:18.435 "uuid": "c1a71dcf-0994-44fe-ae6c-f5cc2cecd3dc", 00:16:18.435 "assigned_rate_limits": { 00:16:18.435 "rw_ios_per_sec": 0, 00:16:18.435 "rw_mbytes_per_sec": 0, 00:16:18.435 "r_mbytes_per_sec": 0, 00:16:18.435 "w_mbytes_per_sec": 0 00:16:18.435 }, 00:16:18.435 "claimed": true, 00:16:18.435 "claim_type": "exclusive_write", 00:16:18.435 "zoned": false, 00:16:18.435 "supported_io_types": { 00:16:18.435 "read": true, 00:16:18.435 "write": true, 00:16:18.435 "unmap": true, 00:16:18.435 "flush": true, 00:16:18.435 "reset": true, 00:16:18.435 "nvme_admin": false, 00:16:18.435 "nvme_io": false, 00:16:18.435 "nvme_io_md": false, 00:16:18.435 "write_zeroes": true, 00:16:18.435 "zcopy": true, 00:16:18.435 "get_zone_info": false, 00:16:18.435 "zone_management": false, 00:16:18.435 "zone_append": false, 00:16:18.435 "compare": false, 00:16:18.435 "compare_and_write": false, 00:16:18.435 "abort": true, 00:16:18.435 "seek_hole": false, 00:16:18.435 "seek_data": false, 00:16:18.435 "copy": true, 00:16:18.435 "nvme_iov_md": false 00:16:18.435 }, 00:16:18.435 "memory_domains": [ 00:16:18.435 { 00:16:18.435 "dma_device_id": "system", 00:16:18.435 "dma_device_type": 1 00:16:18.435 }, 00:16:18.435 { 00:16:18.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.435 "dma_device_type": 2 00:16:18.435 } 00:16:18.435 ], 00:16:18.435 "driver_specific": {} 00:16:18.435 }' 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.435 11:58:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.435 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:18.693 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.951 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.951 "name": "BaseBdev3", 00:16:18.951 "aliases": [ 00:16:18.951 "df8f98ba-5a69-4ad2-aadf-b008a259c0ca" 00:16:18.951 ], 00:16:18.951 "product_name": "Malloc disk", 00:16:18.951 "block_size": 512, 00:16:18.951 "num_blocks": 65536, 00:16:18.951 "uuid": "df8f98ba-5a69-4ad2-aadf-b008a259c0ca", 00:16:18.951 "assigned_rate_limits": { 00:16:18.951 "rw_ios_per_sec": 0, 00:16:18.951 "rw_mbytes_per_sec": 0, 00:16:18.951 "r_mbytes_per_sec": 0, 00:16:18.951 "w_mbytes_per_sec": 0 00:16:18.951 }, 00:16:18.951 "claimed": true, 00:16:18.951 "claim_type": "exclusive_write", 00:16:18.951 "zoned": false, 00:16:18.951 "supported_io_types": { 00:16:18.951 "read": true, 00:16:18.951 "write": true, 00:16:18.951 "unmap": true, 00:16:18.951 "flush": true, 00:16:18.951 "reset": true, 00:16:18.951 "nvme_admin": false, 00:16:18.951 "nvme_io": false, 00:16:18.951 "nvme_io_md": false, 00:16:18.951 "write_zeroes": true, 00:16:18.951 "zcopy": true, 00:16:18.951 "get_zone_info": false, 00:16:18.951 "zone_management": false, 00:16:18.951 "zone_append": false, 00:16:18.951 "compare": false, 00:16:18.951 "compare_and_write": false, 00:16:18.951 "abort": true, 00:16:18.951 "seek_hole": false, 00:16:18.951 "seek_data": false, 00:16:18.951 "copy": true, 00:16:18.951 "nvme_iov_md": false 00:16:18.951 }, 00:16:18.951 "memory_domains": [ 00:16:18.951 { 00:16:18.951 "dma_device_id": "system", 00:16:18.951 "dma_device_type": 1 00:16:18.951 }, 00:16:18.951 { 00:16:18.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.951 "dma_device_type": 2 00:16:18.951 } 00:16:18.951 ], 00:16:18.951 "driver_specific": {} 00:16:18.951 }' 00:16:18.951 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.951 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.951 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.951 11:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.951 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.209 11:58:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.209 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:19.467 [2024-07-25 11:58:05.454500] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.467 [2024-07-25 11:58:05.454524] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.467 [2024-07-25 11:58:05.454570] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.467 [2024-07-25 11:58:05.454810] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.467 [2024-07-25 11:58:05.454821] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1807080 name Existed_Raid, state offline 00:16:19.467 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4154238 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4154238 ']' 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4154238 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4154238 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4154238' 00:16:19.468 killing process with pid 4154238 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4154238 00:16:19.468 [2024-07-25 11:58:05.531382] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.468 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4154238 00:16:19.468 [2024-07-25 11:58:05.554596] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.726 11:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:19.726 00:16:19.726 real 0m26.321s 00:16:19.726 user 0m48.376s 00:16:19.726 sys 0m4.741s 00:16:19.726 11:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.726 11:58:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.726 ************************************ 00:16:19.726 END TEST raid_state_function_test_sb 00:16:19.726 ************************************ 00:16:19.726 11:58:05 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:19.726 11:58:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:19.726 11:58:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.726 11:58:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.726 ************************************ 00:16:19.726 START TEST raid_superblock_test 00:16:19.726 ************************************ 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4159203 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4159203 /var/tmp/spdk-raid.sock 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4159203 ']' 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:19.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
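The bdev_svc launch plus waitforlisten that follow amount to starting the standalone app on a private RPC socket and polling until it answers. A rough sketch under that reading is below; the binary path, -r socket and -L bdev_raid flag are exactly as logged, while the readiness probe (rpc_get_methods) is an assumption not shown in this log.

svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Start the standalone bdev service with raid debug logging on its own socket.
$svc -r $sock -L bdev_raid &
raid_pid=$!

# Poll until the RPC server responds (assumption: rpc_get_methods as the
# liveness check; any cheap RPC would do).
until $rpc -s $sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done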
00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.726 11:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.985 [2024-07-25 11:58:05.890558] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:16:19.985 [2024-07-25 11:58:05.890613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159203 ] 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 
0000:3f:01.4 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:19.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.985 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:19.985 [2024-07-25 11:58:06.019966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.243 [2024-07-25 11:58:06.106298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.243 [2024-07-25 11:58:06.168577] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.243 [2024-07-25 11:58:06.168604] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.809 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:21.067 malloc1 00:16:21.067 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.326 [2024-07-25 11:58:07.232541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.326 [2024-07-25 11:58:07.232582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.326 [2024-07-25 11:58:07.232600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22362f0 00:16:21.326 [2024-07-25 11:58:07.232612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.326 [2024-07-25 11:58:07.234110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.326 [2024-07-25 11:58:07.234136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.326 pt1 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:21.326 malloc2 00:16:21.326 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.585 [2024-07-25 11:58:07.618124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.585 [2024-07-25 11:58:07.618171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.585 [2024-07-25 11:58:07.618187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22376d0 00:16:21.585 [2024-07-25 11:58:07.618203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.585 [2024-07-25 11:58:07.619655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.585 [2024-07-25 11:58:07.619681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.585 pt2 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.585 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:21.844 malloc3 00:16:21.844 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:22.103 [2024-07-25 11:58:08.051437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:22.103 [2024-07-25 11:58:08.051476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.103 [2024-07-25 11:58:08.051492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d06b0 00:16:22.103 [2024-07-25 11:58:08.051504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.104 [2024-07-25 11:58:08.052807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.104 [2024-07-25 11:58:08.052832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:22.104 pt3 00:16:22.104 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:22.104 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:22.104 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:22.104 [2024-07-25 11:58:08.211881] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.104 [2024-07-25 11:58:08.213015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.104 [2024-07-25 11:58:08.213065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:22.104 [2024-07-25 11:58:08.213211] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23d0cb0 00:16:22.104 [2024-07-25 11:58:08.213222] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.104 [2024-07-25 11:58:08.213395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23d05a0 00:16:22.104 [2024-07-25 11:58:08.213532] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23d0cb0 00:16:22.104 [2024-07-25 11:58:08.213541] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23d0cb0 00:16:22.104 [2024-07-25 11:58:08.213627] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.363 "name": "raid_bdev1", 00:16:22.363 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:22.363 "strip_size_kb": 0, 00:16:22.363 "state": "online", 00:16:22.363 "raid_level": "raid1", 00:16:22.363 "superblock": true, 00:16:22.363 "num_base_bdevs": 3, 00:16:22.363 "num_base_bdevs_discovered": 3, 00:16:22.363 "num_base_bdevs_operational": 3, 00:16:22.363 "base_bdevs_list": [ 00:16:22.363 { 00:16:22.363 "name": "pt1", 00:16:22.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.363 "is_configured": true, 00:16:22.363 "data_offset": 2048, 00:16:22.363 "data_size": 63488 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "name": "pt2", 00:16:22.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.363 "is_configured": true, 00:16:22.363 "data_offset": 2048, 00:16:22.363 "data_size": 63488 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "name": "pt3", 00:16:22.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.363 "is_configured": true, 00:16:22.363 "data_offset": 2048, 00:16:22.363 "data_size": 63488 00:16:22.363 } 00:16:22.363 ] 00:16:22.363 }' 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.363 11:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:22.931 11:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:23.190 [2024-07-25 11:58:09.166625] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:23.190 "name": "raid_bdev1", 00:16:23.190 "aliases": [ 00:16:23.190 
"19631f40-7e36-4d0e-82cd-8ad958269584" 00:16:23.190 ], 00:16:23.190 "product_name": "Raid Volume", 00:16:23.190 "block_size": 512, 00:16:23.190 "num_blocks": 63488, 00:16:23.190 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:23.190 "assigned_rate_limits": { 00:16:23.190 "rw_ios_per_sec": 0, 00:16:23.190 "rw_mbytes_per_sec": 0, 00:16:23.190 "r_mbytes_per_sec": 0, 00:16:23.190 "w_mbytes_per_sec": 0 00:16:23.190 }, 00:16:23.190 "claimed": false, 00:16:23.190 "zoned": false, 00:16:23.190 "supported_io_types": { 00:16:23.190 "read": true, 00:16:23.190 "write": true, 00:16:23.190 "unmap": false, 00:16:23.190 "flush": false, 00:16:23.190 "reset": true, 00:16:23.190 "nvme_admin": false, 00:16:23.190 "nvme_io": false, 00:16:23.190 "nvme_io_md": false, 00:16:23.190 "write_zeroes": true, 00:16:23.190 "zcopy": false, 00:16:23.190 "get_zone_info": false, 00:16:23.190 "zone_management": false, 00:16:23.190 "zone_append": false, 00:16:23.190 "compare": false, 00:16:23.190 "compare_and_write": false, 00:16:23.190 "abort": false, 00:16:23.190 "seek_hole": false, 00:16:23.190 "seek_data": false, 00:16:23.190 "copy": false, 00:16:23.190 "nvme_iov_md": false 00:16:23.190 }, 00:16:23.190 "memory_domains": [ 00:16:23.190 { 00:16:23.190 "dma_device_id": "system", 00:16:23.190 "dma_device_type": 1 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.190 "dma_device_type": 2 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "dma_device_id": "system", 00:16:23.190 "dma_device_type": 1 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.190 "dma_device_type": 2 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "dma_device_id": "system", 00:16:23.190 "dma_device_type": 1 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.190 "dma_device_type": 2 00:16:23.190 } 00:16:23.190 ], 00:16:23.190 "driver_specific": { 00:16:23.190 "raid": { 00:16:23.190 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:23.190 "strip_size_kb": 0, 00:16:23.190 "state": "online", 00:16:23.190 "raid_level": "raid1", 00:16:23.190 "superblock": true, 00:16:23.190 "num_base_bdevs": 3, 00:16:23.190 "num_base_bdevs_discovered": 3, 00:16:23.190 "num_base_bdevs_operational": 3, 00:16:23.190 "base_bdevs_list": [ 00:16:23.190 { 00:16:23.190 "name": "pt1", 00:16:23.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.190 "is_configured": true, 00:16:23.190 "data_offset": 2048, 00:16:23.190 "data_size": 63488 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "name": "pt2", 00:16:23.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.190 "is_configured": true, 00:16:23.190 "data_offset": 2048, 00:16:23.190 "data_size": 63488 00:16:23.190 }, 00:16:23.190 { 00:16:23.190 "name": "pt3", 00:16:23.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.190 "is_configured": true, 00:16:23.190 "data_offset": 2048, 00:16:23.190 "data_size": 63488 00:16:23.190 } 00:16:23.190 ] 00:16:23.190 } 00:16:23.190 } 00:16:23.190 }' 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:23.190 pt2 00:16:23.190 pt3' 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:23.190 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.450 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.450 "name": "pt1", 00:16:23.450 "aliases": [ 00:16:23.450 "00000000-0000-0000-0000-000000000001" 00:16:23.450 ], 00:16:23.450 "product_name": "passthru", 00:16:23.450 "block_size": 512, 00:16:23.450 "num_blocks": 65536, 00:16:23.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.450 "assigned_rate_limits": { 00:16:23.450 "rw_ios_per_sec": 0, 00:16:23.450 "rw_mbytes_per_sec": 0, 00:16:23.450 "r_mbytes_per_sec": 0, 00:16:23.450 "w_mbytes_per_sec": 0 00:16:23.450 }, 00:16:23.450 "claimed": true, 00:16:23.450 "claim_type": "exclusive_write", 00:16:23.450 "zoned": false, 00:16:23.450 "supported_io_types": { 00:16:23.450 "read": true, 00:16:23.450 "write": true, 00:16:23.450 "unmap": true, 00:16:23.450 "flush": true, 00:16:23.450 "reset": true, 00:16:23.450 "nvme_admin": false, 00:16:23.450 "nvme_io": false, 00:16:23.450 "nvme_io_md": false, 00:16:23.450 "write_zeroes": true, 00:16:23.450 "zcopy": true, 00:16:23.450 "get_zone_info": false, 00:16:23.450 "zone_management": false, 00:16:23.450 "zone_append": false, 00:16:23.450 "compare": false, 00:16:23.450 "compare_and_write": false, 00:16:23.450 "abort": true, 00:16:23.450 "seek_hole": false, 00:16:23.450 "seek_data": false, 00:16:23.450 "copy": true, 00:16:23.450 "nvme_iov_md": false 00:16:23.450 }, 00:16:23.450 "memory_domains": [ 00:16:23.450 { 00:16:23.450 "dma_device_id": "system", 00:16:23.450 "dma_device_type": 1 00:16:23.450 }, 00:16:23.450 { 00:16:23.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.450 "dma_device_type": 2 00:16:23.450 } 00:16:23.450 ], 00:16:23.450 "driver_specific": { 00:16:23.450 "passthru": { 00:16:23.450 "name": "pt1", 00:16:23.450 "base_bdev_name": "malloc1" 00:16:23.450 } 00:16:23.450 } 00:16:23.450 }' 00:16:23.450 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.450 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.450 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.450 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.709 11:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:23.709 11:58:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.968 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.968 "name": "pt2", 00:16:23.968 "aliases": [ 00:16:23.968 "00000000-0000-0000-0000-000000000002" 00:16:23.968 ], 00:16:23.968 "product_name": "passthru", 00:16:23.968 "block_size": 512, 00:16:23.968 "num_blocks": 65536, 00:16:23.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.968 "assigned_rate_limits": { 00:16:23.968 "rw_ios_per_sec": 0, 00:16:23.968 "rw_mbytes_per_sec": 0, 00:16:23.968 "r_mbytes_per_sec": 0, 00:16:23.968 "w_mbytes_per_sec": 0 00:16:23.968 }, 00:16:23.968 "claimed": true, 00:16:23.968 "claim_type": "exclusive_write", 00:16:23.968 "zoned": false, 00:16:23.968 "supported_io_types": { 00:16:23.968 "read": true, 00:16:23.968 "write": true, 00:16:23.968 "unmap": true, 00:16:23.968 "flush": true, 00:16:23.968 "reset": true, 00:16:23.968 "nvme_admin": false, 00:16:23.968 "nvme_io": false, 00:16:23.968 "nvme_io_md": false, 00:16:23.968 "write_zeroes": true, 00:16:23.968 "zcopy": true, 00:16:23.968 "get_zone_info": false, 00:16:23.968 "zone_management": false, 00:16:23.968 "zone_append": false, 00:16:23.968 "compare": false, 00:16:23.968 "compare_and_write": false, 00:16:23.968 "abort": true, 00:16:23.968 "seek_hole": false, 00:16:23.968 "seek_data": false, 00:16:23.968 "copy": true, 00:16:23.968 "nvme_iov_md": false 00:16:23.968 }, 00:16:23.968 "memory_domains": [ 00:16:23.968 { 00:16:23.968 "dma_device_id": "system", 00:16:23.968 "dma_device_type": 1 00:16:23.968 }, 00:16:23.968 { 00:16:23.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.968 "dma_device_type": 2 00:16:23.968 } 00:16:23.968 ], 00:16:23.968 "driver_specific": { 00:16:23.968 "passthru": { 00:16:23.968 "name": "pt2", 00:16:23.968 "base_bdev_name": "malloc2" 00:16:23.968 } 00:16:23.968 } 00:16:23.968 }' 00:16:23.968 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.968 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.227 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.527 "name": "pt3", 00:16:24.527 "aliases": [ 00:16:24.527 "00000000-0000-0000-0000-000000000003" 00:16:24.527 ], 00:16:24.527 "product_name": "passthru", 00:16:24.527 "block_size": 512, 00:16:24.527 "num_blocks": 65536, 00:16:24.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.527 "assigned_rate_limits": { 00:16:24.527 "rw_ios_per_sec": 0, 00:16:24.527 "rw_mbytes_per_sec": 0, 00:16:24.527 "r_mbytes_per_sec": 0, 00:16:24.527 "w_mbytes_per_sec": 0 00:16:24.527 }, 00:16:24.527 "claimed": true, 00:16:24.527 "claim_type": "exclusive_write", 00:16:24.527 "zoned": false, 00:16:24.527 "supported_io_types": { 00:16:24.527 "read": true, 00:16:24.527 "write": true, 00:16:24.527 "unmap": true, 00:16:24.527 "flush": true, 00:16:24.527 "reset": true, 00:16:24.527 "nvme_admin": false, 00:16:24.527 "nvme_io": false, 00:16:24.527 "nvme_io_md": false, 00:16:24.527 "write_zeroes": true, 00:16:24.527 "zcopy": true, 00:16:24.527 "get_zone_info": false, 00:16:24.527 "zone_management": false, 00:16:24.527 "zone_append": false, 00:16:24.527 "compare": false, 00:16:24.527 "compare_and_write": false, 00:16:24.527 "abort": true, 00:16:24.527 "seek_hole": false, 00:16:24.527 "seek_data": false, 00:16:24.527 "copy": true, 00:16:24.527 "nvme_iov_md": false 00:16:24.527 }, 00:16:24.527 "memory_domains": [ 00:16:24.527 { 00:16:24.527 "dma_device_id": "system", 00:16:24.527 "dma_device_type": 1 00:16:24.527 }, 00:16:24.527 { 00:16:24.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.527 "dma_device_type": 2 00:16:24.527 } 00:16:24.527 ], 00:16:24.527 "driver_specific": { 00:16:24.527 "passthru": { 00:16:24.527 "name": "pt3", 00:16:24.527 "base_bdev_name": "malloc3" 00:16:24.527 } 00:16:24.527 } 00:16:24.527 }' 00:16:24.527 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.786 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.045 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.045 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.045 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.045 11:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:25.305 [2024-07-25 11:58:11.167895] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.305 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=19631f40-7e36-4d0e-82cd-8ad958269584 00:16:25.305 11:58:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 19631f40-7e36-4d0e-82cd-8ad958269584 ']' 00:16:25.305 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:25.305 [2024-07-25 11:58:11.388233] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.305 [2024-07-25 11:58:11.388250] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.305 [2024-07-25 11:58:11.388295] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.305 [2024-07-25 11:58:11.388360] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.305 [2024-07-25 11:58:11.388371] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d0cb0 name raid_bdev1, state offline 00:16:25.305 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.305 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:25.564 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:25.564 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:25.564 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.564 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.823 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.823 11:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:26.083 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.083 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:26.343 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:26.343 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:26.602 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:26.864 [2024-07-25 11:58:12.755781] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:26.864 [2024-07-25 11:58:12.757028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:26.864 [2024-07-25 11:58:12.757067] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:26.864 [2024-07-25 11:58:12.757107] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:26.864 [2024-07-25 11:58:12.757149] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:26.864 [2024-07-25 11:58:12.757171] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:26.864 [2024-07-25 11:58:12.757188] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.864 [2024-07-25 11:58:12.757197] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d9d50 name raid_bdev1, state configuring 00:16:26.864 request: 00:16:26.864 { 00:16:26.864 "name": "raid_bdev1", 00:16:26.864 "raid_level": "raid1", 00:16:26.864 "base_bdevs": [ 00:16:26.864 "malloc1", 00:16:26.864 "malloc2", 00:16:26.864 "malloc3" 00:16:26.864 ], 00:16:26.864 "superblock": false, 00:16:26.864 "method": "bdev_raid_create", 00:16:26.864 "req_id": 1 00:16:26.864 } 00:16:26.864 Got JSON-RPC error response 00:16:26.864 response: 00:16:26.864 { 00:16:26.864 "code": -17, 00:16:26.864 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:26.865 } 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.865 11:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.124 [2024-07-25 11:58:13.212926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.124 [2024-07-25 11:58:13.212965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.124 [2024-07-25 11:58:13.212980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23cdd00 00:16:27.124 [2024-07-25 11:58:13.212991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.124 [2024-07-25 11:58:13.214455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.124 [2024-07-25 11:58:13.214481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.124 [2024-07-25 11:58:13.214543] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.124 [2024-07-25 11:58:13.214570] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.124 pt1 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.124 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.383 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.383 "name": "raid_bdev1", 00:16:27.383 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:27.383 "strip_size_kb": 0, 00:16:27.383 "state": "configuring", 00:16:27.383 "raid_level": "raid1", 00:16:27.383 "superblock": true, 00:16:27.383 "num_base_bdevs": 3, 00:16:27.383 "num_base_bdevs_discovered": 1, 00:16:27.383 "num_base_bdevs_operational": 3, 00:16:27.383 
"base_bdevs_list": [ 00:16:27.383 { 00:16:27.383 "name": "pt1", 00:16:27.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.383 "is_configured": true, 00:16:27.383 "data_offset": 2048, 00:16:27.383 "data_size": 63488 00:16:27.383 }, 00:16:27.383 { 00:16:27.383 "name": null, 00:16:27.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.383 "is_configured": false, 00:16:27.383 "data_offset": 2048, 00:16:27.383 "data_size": 63488 00:16:27.383 }, 00:16:27.383 { 00:16:27.383 "name": null, 00:16:27.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.383 "is_configured": false, 00:16:27.383 "data_offset": 2048, 00:16:27.383 "data_size": 63488 00:16:27.383 } 00:16:27.383 ] 00:16:27.383 }' 00:16:27.383 11:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.383 11:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.951 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:16:27.951 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.212 [2024-07-25 11:58:14.235634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.212 [2024-07-25 11:58:14.235678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.212 [2024-07-25 11:58:14.235696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222cc20 00:16:28.212 [2024-07-25 11:58:14.235708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.212 [2024-07-25 11:58:14.236015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.212 [2024-07-25 11:58:14.236031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.212 [2024-07-25 11:58:14.236086] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.212 [2024-07-25 11:58:14.236104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.212 pt2 00:16:28.212 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:28.535 [2024-07-25 11:58:14.460249] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.535 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.792 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.792 "name": "raid_bdev1", 00:16:28.792 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:28.792 "strip_size_kb": 0, 00:16:28.792 "state": "configuring", 00:16:28.792 "raid_level": "raid1", 00:16:28.793 "superblock": true, 00:16:28.793 "num_base_bdevs": 3, 00:16:28.793 "num_base_bdevs_discovered": 1, 00:16:28.793 "num_base_bdevs_operational": 3, 00:16:28.793 "base_bdevs_list": [ 00:16:28.793 { 00:16:28.793 "name": "pt1", 00:16:28.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.793 "is_configured": true, 00:16:28.793 "data_offset": 2048, 00:16:28.793 "data_size": 63488 00:16:28.793 }, 00:16:28.793 { 00:16:28.793 "name": null, 00:16:28.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.793 "is_configured": false, 00:16:28.793 "data_offset": 2048, 00:16:28.793 "data_size": 63488 00:16:28.793 }, 00:16:28.793 { 00:16:28.793 "name": null, 00:16:28.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.793 "is_configured": false, 00:16:28.793 "data_offset": 2048, 00:16:28.793 "data_size": 63488 00:16:28.793 } 00:16:28.793 ] 00:16:28.793 }' 00:16:28.793 11:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.793 11:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.359 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:29.359 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:29.359 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.618 [2024-07-25 11:58:15.478930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.618 [2024-07-25 11:58:15.478977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.618 [2024-07-25 11:58:15.479001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222d4d0 00:16:29.618 [2024-07-25 11:58:15.479013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.618 [2024-07-25 11:58:15.479378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.618 [2024-07-25 11:58:15.479398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.618 [2024-07-25 11:58:15.479460] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:29.618 [2024-07-25 11:58:15.479478] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.618 pt2 00:16:29.618 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:29.618 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:29.618 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:29.618 [2024-07-25 11:58:15.707542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:29.619 [2024-07-25 11:58:15.707580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.619 [2024-07-25 11:58:15.707597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222d9e0 00:16:29.619 [2024-07-25 11:58:15.707608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.619 [2024-07-25 11:58:15.707899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.619 [2024-07-25 11:58:15.707915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:29.619 [2024-07-25 11:58:15.707970] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:29.619 [2024-07-25 11:58:15.707989] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.619 [2024-07-25 11:58:15.708088] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x222cf40 00:16:29.619 [2024-07-25 11:58:15.708098] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:29.619 [2024-07-25 11:58:15.708261] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23cfc60 00:16:29.619 [2024-07-25 11:58:15.708384] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x222cf40 00:16:29.619 [2024-07-25 11:58:15.708394] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x222cf40 00:16:29.619 [2024-07-25 11:58:15.708481] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.619 pt3 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.619 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.878 11:58:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.878 "name": "raid_bdev1", 00:16:29.878 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:29.878 "strip_size_kb": 0, 00:16:29.878 "state": "online", 00:16:29.878 "raid_level": "raid1", 00:16:29.878 "superblock": true, 00:16:29.878 "num_base_bdevs": 3, 00:16:29.878 "num_base_bdevs_discovered": 3, 00:16:29.878 "num_base_bdevs_operational": 3, 00:16:29.878 "base_bdevs_list": [ 00:16:29.878 { 00:16:29.878 "name": "pt1", 00:16:29.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 }, 00:16:29.878 { 00:16:29.878 "name": "pt2", 00:16:29.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 }, 00:16:29.878 { 00:16:29.878 "name": "pt3", 00:16:29.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 } 00:16:29.878 ] 00:16:29.878 }' 00:16:29.878 11:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.878 11:58:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.446 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.705 [2024-07-25 11:58:16.670325] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.705 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.705 "name": "raid_bdev1", 00:16:30.705 "aliases": [ 00:16:30.705 "19631f40-7e36-4d0e-82cd-8ad958269584" 00:16:30.705 ], 00:16:30.705 "product_name": "Raid Volume", 00:16:30.705 "block_size": 512, 00:16:30.705 "num_blocks": 63488, 00:16:30.705 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:30.705 "assigned_rate_limits": { 00:16:30.705 "rw_ios_per_sec": 0, 00:16:30.705 "rw_mbytes_per_sec": 0, 00:16:30.705 "r_mbytes_per_sec": 0, 00:16:30.705 "w_mbytes_per_sec": 0 00:16:30.705 }, 00:16:30.705 "claimed": false, 00:16:30.705 "zoned": false, 00:16:30.705 "supported_io_types": { 00:16:30.705 "read": true, 00:16:30.705 "write": true, 00:16:30.705 "unmap": false, 00:16:30.705 "flush": false, 00:16:30.705 "reset": true, 00:16:30.705 "nvme_admin": false, 00:16:30.705 "nvme_io": false, 00:16:30.705 "nvme_io_md": false, 00:16:30.705 "write_zeroes": true, 00:16:30.705 "zcopy": false, 00:16:30.705 "get_zone_info": false, 00:16:30.705 "zone_management": false, 00:16:30.705 "zone_append": false, 00:16:30.705 "compare": 
false, 00:16:30.705 "compare_and_write": false, 00:16:30.705 "abort": false, 00:16:30.705 "seek_hole": false, 00:16:30.705 "seek_data": false, 00:16:30.705 "copy": false, 00:16:30.705 "nvme_iov_md": false 00:16:30.705 }, 00:16:30.705 "memory_domains": [ 00:16:30.705 { 00:16:30.705 "dma_device_id": "system", 00:16:30.705 "dma_device_type": 1 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.705 "dma_device_type": 2 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "system", 00:16:30.705 "dma_device_type": 1 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.705 "dma_device_type": 2 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "system", 00:16:30.705 "dma_device_type": 1 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.705 "dma_device_type": 2 00:16:30.705 } 00:16:30.705 ], 00:16:30.705 "driver_specific": { 00:16:30.705 "raid": { 00:16:30.705 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:30.705 "strip_size_kb": 0, 00:16:30.705 "state": "online", 00:16:30.705 "raid_level": "raid1", 00:16:30.705 "superblock": true, 00:16:30.705 "num_base_bdevs": 3, 00:16:30.705 "num_base_bdevs_discovered": 3, 00:16:30.705 "num_base_bdevs_operational": 3, 00:16:30.705 "base_bdevs_list": [ 00:16:30.705 { 00:16:30.705 "name": "pt1", 00:16:30.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.705 "is_configured": true, 00:16:30.705 "data_offset": 2048, 00:16:30.705 "data_size": 63488 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "name": "pt2", 00:16:30.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.705 "is_configured": true, 00:16:30.705 "data_offset": 2048, 00:16:30.705 "data_size": 63488 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "name": "pt3", 00:16:30.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.705 "is_configured": true, 00:16:30.705 "data_offset": 2048, 00:16:30.705 "data_size": 63488 00:16:30.705 } 00:16:30.705 ] 00:16:30.705 } 00:16:30.705 } 00:16:30.705 }' 00:16:30.706 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.706 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:30.706 pt2 00:16:30.706 pt3' 00:16:30.706 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.706 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:30.706 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.965 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.965 "name": "pt1", 00:16:30.965 "aliases": [ 00:16:30.965 "00000000-0000-0000-0000-000000000001" 00:16:30.965 ], 00:16:30.965 "product_name": "passthru", 00:16:30.965 "block_size": 512, 00:16:30.965 "num_blocks": 65536, 00:16:30.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.965 "assigned_rate_limits": { 00:16:30.965 "rw_ios_per_sec": 0, 00:16:30.965 "rw_mbytes_per_sec": 0, 00:16:30.965 "r_mbytes_per_sec": 0, 00:16:30.965 "w_mbytes_per_sec": 0 00:16:30.965 }, 00:16:30.965 "claimed": true, 00:16:30.965 "claim_type": "exclusive_write", 00:16:30.965 "zoned": false, 00:16:30.965 "supported_io_types": { 00:16:30.965 "read": true, 
00:16:30.965 "write": true, 00:16:30.965 "unmap": true, 00:16:30.965 "flush": true, 00:16:30.965 "reset": true, 00:16:30.965 "nvme_admin": false, 00:16:30.965 "nvme_io": false, 00:16:30.965 "nvme_io_md": false, 00:16:30.965 "write_zeroes": true, 00:16:30.965 "zcopy": true, 00:16:30.965 "get_zone_info": false, 00:16:30.965 "zone_management": false, 00:16:30.965 "zone_append": false, 00:16:30.965 "compare": false, 00:16:30.965 "compare_and_write": false, 00:16:30.965 "abort": true, 00:16:30.965 "seek_hole": false, 00:16:30.965 "seek_data": false, 00:16:30.965 "copy": true, 00:16:30.965 "nvme_iov_md": false 00:16:30.965 }, 00:16:30.965 "memory_domains": [ 00:16:30.965 { 00:16:30.965 "dma_device_id": "system", 00:16:30.965 "dma_device_type": 1 00:16:30.965 }, 00:16:30.965 { 00:16:30.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.965 "dma_device_type": 2 00:16:30.965 } 00:16:30.965 ], 00:16:30.965 "driver_specific": { 00:16:30.965 "passthru": { 00:16:30.965 "name": "pt1", 00:16:30.965 "base_bdev_name": "malloc1" 00:16:30.965 } 00:16:30.965 } 00:16:30.965 }' 00:16:30.965 11:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.965 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.965 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.965 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:31.224 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.483 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.483 "name": "pt2", 00:16:31.483 "aliases": [ 00:16:31.483 "00000000-0000-0000-0000-000000000002" 00:16:31.483 ], 00:16:31.483 "product_name": "passthru", 00:16:31.483 "block_size": 512, 00:16:31.483 "num_blocks": 65536, 00:16:31.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.483 "assigned_rate_limits": { 00:16:31.483 "rw_ios_per_sec": 0, 00:16:31.483 "rw_mbytes_per_sec": 0, 00:16:31.483 "r_mbytes_per_sec": 0, 00:16:31.483 "w_mbytes_per_sec": 0 00:16:31.483 }, 00:16:31.483 "claimed": true, 00:16:31.483 "claim_type": "exclusive_write", 00:16:31.483 "zoned": false, 00:16:31.483 "supported_io_types": { 00:16:31.483 "read": true, 00:16:31.483 "write": true, 00:16:31.483 "unmap": true, 00:16:31.483 "flush": true, 00:16:31.483 "reset": true, 00:16:31.483 
"nvme_admin": false, 00:16:31.483 "nvme_io": false, 00:16:31.483 "nvme_io_md": false, 00:16:31.483 "write_zeroes": true, 00:16:31.483 "zcopy": true, 00:16:31.483 "get_zone_info": false, 00:16:31.483 "zone_management": false, 00:16:31.483 "zone_append": false, 00:16:31.483 "compare": false, 00:16:31.483 "compare_and_write": false, 00:16:31.483 "abort": true, 00:16:31.483 "seek_hole": false, 00:16:31.483 "seek_data": false, 00:16:31.483 "copy": true, 00:16:31.483 "nvme_iov_md": false 00:16:31.483 }, 00:16:31.483 "memory_domains": [ 00:16:31.483 { 00:16:31.483 "dma_device_id": "system", 00:16:31.483 "dma_device_type": 1 00:16:31.483 }, 00:16:31.483 { 00:16:31.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.483 "dma_device_type": 2 00:16:31.483 } 00:16:31.483 ], 00:16:31.483 "driver_specific": { 00:16:31.483 "passthru": { 00:16:31.483 "name": "pt2", 00:16:31.483 "base_bdev_name": "malloc2" 00:16:31.483 } 00:16:31.483 } 00:16:31.483 }' 00:16:31.483 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.483 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.742 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.001 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.001 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.001 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.001 11:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.260 "name": "pt3", 00:16:32.260 "aliases": [ 00:16:32.260 "00000000-0000-0000-0000-000000000003" 00:16:32.260 ], 00:16:32.260 "product_name": "passthru", 00:16:32.260 "block_size": 512, 00:16:32.260 "num_blocks": 65536, 00:16:32.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.260 "assigned_rate_limits": { 00:16:32.260 "rw_ios_per_sec": 0, 00:16:32.260 "rw_mbytes_per_sec": 0, 00:16:32.260 "r_mbytes_per_sec": 0, 00:16:32.260 "w_mbytes_per_sec": 0 00:16:32.260 }, 00:16:32.260 "claimed": true, 00:16:32.260 "claim_type": "exclusive_write", 00:16:32.260 "zoned": false, 00:16:32.260 "supported_io_types": { 00:16:32.260 "read": true, 00:16:32.260 "write": true, 00:16:32.260 "unmap": true, 00:16:32.260 "flush": true, 00:16:32.260 "reset": true, 00:16:32.260 "nvme_admin": false, 00:16:32.260 "nvme_io": false, 00:16:32.260 "nvme_io_md": false, 00:16:32.260 "write_zeroes": true, 00:16:32.260 
"zcopy": true, 00:16:32.260 "get_zone_info": false, 00:16:32.260 "zone_management": false, 00:16:32.260 "zone_append": false, 00:16:32.260 "compare": false, 00:16:32.260 "compare_and_write": false, 00:16:32.260 "abort": true, 00:16:32.260 "seek_hole": false, 00:16:32.260 "seek_data": false, 00:16:32.260 "copy": true, 00:16:32.260 "nvme_iov_md": false 00:16:32.260 }, 00:16:32.260 "memory_domains": [ 00:16:32.260 { 00:16:32.260 "dma_device_id": "system", 00:16:32.260 "dma_device_type": 1 00:16:32.260 }, 00:16:32.260 { 00:16:32.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.260 "dma_device_type": 2 00:16:32.260 } 00:16:32.260 ], 00:16:32.260 "driver_specific": { 00:16:32.260 "passthru": { 00:16:32.260 "name": "pt3", 00:16:32.260 "base_bdev_name": "malloc3" 00:16:32.260 } 00:16:32.260 } 00:16:32.260 }' 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.260 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:32.519 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:32.778 [2024-07-25 11:58:18.683660] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.778 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 19631f40-7e36-4d0e-82cd-8ad958269584 '!=' 19631f40-7e36-4d0e-82cd-8ad958269584 ']' 00:16:32.778 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:32.778 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:32.778 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:32.778 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:33.037 [2024-07-25 11:58:18.904005] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.037 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.038 11:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.038 11:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.038 "name": "raid_bdev1", 00:16:33.038 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:33.038 "strip_size_kb": 0, 00:16:33.038 "state": "online", 00:16:33.038 "raid_level": "raid1", 00:16:33.038 "superblock": true, 00:16:33.038 "num_base_bdevs": 3, 00:16:33.038 "num_base_bdevs_discovered": 2, 00:16:33.038 "num_base_bdevs_operational": 2, 00:16:33.038 "base_bdevs_list": [ 00:16:33.038 { 00:16:33.038 "name": null, 00:16:33.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.038 "is_configured": false, 00:16:33.038 "data_offset": 2048, 00:16:33.038 "data_size": 63488 00:16:33.038 }, 00:16:33.038 { 00:16:33.038 "name": "pt2", 00:16:33.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.038 "is_configured": true, 00:16:33.038 "data_offset": 2048, 00:16:33.038 "data_size": 63488 00:16:33.038 }, 00:16:33.038 { 00:16:33.038 "name": "pt3", 00:16:33.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.038 "is_configured": true, 00:16:33.038 "data_offset": 2048, 00:16:33.038 "data_size": 63488 00:16:33.038 } 00:16:33.038 ] 00:16:33.038 }' 00:16:33.038 11:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.038 11:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.606 11:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:33.865 [2024-07-25 11:58:19.918758] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.865 [2024-07-25 11:58:19.918785] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.865 [2024-07-25 11:58:19.918834] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.865 [2024-07-25 11:58:19.918886] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.865 [2024-07-25 11:58:19.918897] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x222cf40 name raid_bdev1, state offline 00:16:33.865 11:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.865 11:58:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:34.124 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:34.124 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:34.124 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:34.124 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:34.124 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:34.383 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:34.383 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:34.383 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:34.642 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:34.643 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:34.643 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:34.643 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:34.643 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.902 [2024-07-25 11:58:20.845179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.902 [2024-07-25 11:58:20.845221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.902 [2024-07-25 11:58:20.845238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d93f0 00:16:34.902 [2024-07-25 11:58:20.845249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.902 [2024-07-25 11:58:20.846723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.902 [2024-07-25 11:58:20.846748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.902 [2024-07-25 11:58:20.846806] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:34.902 [2024-07-25 11:58:20.846835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.902 pt2 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.902 11:58:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.902 11:58:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.161 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.161 "name": "raid_bdev1", 00:16:35.161 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:35.161 "strip_size_kb": 0, 00:16:35.161 "state": "configuring", 00:16:35.161 "raid_level": "raid1", 00:16:35.161 "superblock": true, 00:16:35.161 "num_base_bdevs": 3, 00:16:35.161 "num_base_bdevs_discovered": 1, 00:16:35.161 "num_base_bdevs_operational": 2, 00:16:35.161 "base_bdevs_list": [ 00:16:35.161 { 00:16:35.161 "name": null, 00:16:35.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.161 "is_configured": false, 00:16:35.161 "data_offset": 2048, 00:16:35.161 "data_size": 63488 00:16:35.161 }, 00:16:35.161 { 00:16:35.161 "name": "pt2", 00:16:35.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.161 "is_configured": true, 00:16:35.161 "data_offset": 2048, 00:16:35.161 "data_size": 63488 00:16:35.161 }, 00:16:35.161 { 00:16:35.161 "name": null, 00:16:35.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.161 "is_configured": false, 00:16:35.161 "data_offset": 2048, 00:16:35.161 "data_size": 63488 00:16:35.161 } 00:16:35.161 ] 00:16:35.161 }' 00:16:35.161 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.161 11:58:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.730 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:35.730 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:35.730 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:16:35.730 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.990 [2024-07-25 11:58:21.887942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.990 [2024-07-25 11:58:21.887992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.990 [2024-07-25 11:58:21.888013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ce370 00:16:35.990 [2024-07-25 11:58:21.888024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.990 [2024-07-25 11:58:21.888374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.990 [2024-07-25 11:58:21.888391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.990 [2024-07-25 11:58:21.888448] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.990 [2024-07-25 11:58:21.888466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.990 [2024-07-25 11:58:21.888554] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x222f430 00:16:35.990 [2024-07-25 11:58:21.888564] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.990 [2024-07-25 11:58:21.888717] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23e89c0 00:16:35.990 [2024-07-25 11:58:21.888831] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x222f430 00:16:35.990 [2024-07-25 11:58:21.888840] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x222f430 00:16:35.990 [2024-07-25 11:58:21.888927] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.990 pt3 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.990 11:58:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.249 11:58:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.249 "name": "raid_bdev1", 00:16:36.249 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:36.249 "strip_size_kb": 0, 00:16:36.249 "state": "online", 00:16:36.249 "raid_level": "raid1", 00:16:36.249 "superblock": true, 00:16:36.249 "num_base_bdevs": 3, 00:16:36.249 "num_base_bdevs_discovered": 2, 00:16:36.249 "num_base_bdevs_operational": 2, 00:16:36.249 "base_bdevs_list": [ 00:16:36.249 { 00:16:36.249 "name": null, 00:16:36.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.249 "is_configured": false, 00:16:36.249 "data_offset": 2048, 00:16:36.249 "data_size": 63488 00:16:36.249 }, 00:16:36.249 { 00:16:36.249 "name": "pt2", 00:16:36.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.249 "is_configured": true, 00:16:36.249 "data_offset": 2048, 00:16:36.249 "data_size": 63488 00:16:36.249 }, 00:16:36.249 { 00:16:36.249 "name": "pt3", 00:16:36.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.249 "is_configured": true, 00:16:36.249 "data_offset": 2048, 00:16:36.249 "data_size": 63488 00:16:36.249 } 00:16:36.249 ] 00:16:36.250 }' 00:16:36.250 11:58:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.250 11:58:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.817 
11:58:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:36.817 [2024-07-25 11:58:22.902594] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.817 [2024-07-25 11:58:22.902617] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.817 [2024-07-25 11:58:22.902667] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.817 [2024-07-25 11:58:22.902720] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.818 [2024-07-25 11:58:22.902731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x222f430 name raid_bdev1, state offline 00:16:36.818 11:58:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.818 11:58:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:37.077 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:37.077 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:37.077 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:16:37.077 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:16:37.077 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:37.336 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.336 [2024-07-25 11:58:23.439989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.336 [2024-07-25 11:58:23.440036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.336 [2024-07-25 11:58:23.440053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ce370 00:16:37.336 [2024-07-25 11:58:23.440064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.336 [2024-07-25 11:58:23.441577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.336 [2024-07-25 11:58:23.441611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.336 [2024-07-25 11:58:23.441671] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:37.336 [2024-07-25 11:58:23.441697] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.336 [2024-07-25 11:58:23.441787] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:37.336 [2024-07-25 11:58:23.441799] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.336 [2024-07-25 11:58:23.441812] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23cede0 name raid_bdev1, state configuring 00:16:37.336 [2024-07-25 11:58:23.441833] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.336 pt1 00:16:37.595 11:58:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.595 "name": "raid_bdev1", 00:16:37.595 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:37.595 "strip_size_kb": 0, 00:16:37.595 "state": "configuring", 00:16:37.595 "raid_level": "raid1", 00:16:37.595 "superblock": true, 00:16:37.595 "num_base_bdevs": 3, 00:16:37.595 "num_base_bdevs_discovered": 1, 00:16:37.595 "num_base_bdevs_operational": 2, 00:16:37.595 "base_bdevs_list": [ 00:16:37.595 { 00:16:37.595 "name": null, 00:16:37.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.595 "is_configured": false, 00:16:37.595 "data_offset": 2048, 00:16:37.595 "data_size": 63488 00:16:37.595 }, 00:16:37.595 { 00:16:37.595 "name": "pt2", 00:16:37.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.595 "is_configured": true, 00:16:37.595 "data_offset": 2048, 00:16:37.595 "data_size": 63488 00:16:37.595 }, 00:16:37.595 { 00:16:37.595 "name": null, 00:16:37.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.595 "is_configured": false, 00:16:37.595 "data_offset": 2048, 00:16:37.595 "data_size": 63488 00:16:37.595 } 00:16:37.595 ] 00:16:37.595 }' 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.595 11:58:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.163 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:38.163 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:38.422 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:38.422 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:16:38.681 [2024-07-25 11:58:24.699317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.681 [2024-07-25 11:58:24.699367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.681 [2024-07-25 11:58:24.699385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2231290 00:16:38.681 [2024-07-25 11:58:24.699397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.681 [2024-07-25 11:58:24.699706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.681 [2024-07-25 11:58:24.699722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.681 [2024-07-25 11:58:24.699780] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:38.681 [2024-07-25 11:58:24.699798] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.681 [2024-07-25 11:58:24.699887] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23cf6d0 00:16:38.681 [2024-07-25 11:58:24.699896] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.681 [2024-07-25 11:58:24.700043] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2236f40 00:16:38.681 [2024-07-25 11:58:24.700188] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23cf6d0 00:16:38.681 [2024-07-25 11:58:24.700201] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23cf6d0 00:16:38.681 [2024-07-25 11:58:24.700293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.681 pt3 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.681 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.940 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.940 "name": "raid_bdev1", 00:16:38.940 "uuid": "19631f40-7e36-4d0e-82cd-8ad958269584", 00:16:38.940 "strip_size_kb": 0, 00:16:38.940 "state": "online", 00:16:38.940 "raid_level": "raid1", 00:16:38.940 "superblock": true, 00:16:38.940 
"num_base_bdevs": 3, 00:16:38.940 "num_base_bdevs_discovered": 2, 00:16:38.940 "num_base_bdevs_operational": 2, 00:16:38.940 "base_bdevs_list": [ 00:16:38.940 { 00:16:38.940 "name": null, 00:16:38.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.940 "is_configured": false, 00:16:38.940 "data_offset": 2048, 00:16:38.940 "data_size": 63488 00:16:38.940 }, 00:16:38.940 { 00:16:38.940 "name": "pt2", 00:16:38.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.940 "is_configured": true, 00:16:38.940 "data_offset": 2048, 00:16:38.940 "data_size": 63488 00:16:38.940 }, 00:16:38.940 { 00:16:38.940 "name": "pt3", 00:16:38.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.940 "is_configured": true, 00:16:38.940 "data_offset": 2048, 00:16:38.940 "data_size": 63488 00:16:38.940 } 00:16:38.940 ] 00:16:38.940 }' 00:16:38.940 11:58:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.940 11:58:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.506 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:39.506 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:39.765 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:39.765 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:39.765 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:40.024 [2024-07-25 11:58:25.934793] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 19631f40-7e36-4d0e-82cd-8ad958269584 '!=' 19631f40-7e36-4d0e-82cd-8ad958269584 ']' 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4159203 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4159203 ']' 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4159203 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.024 11:58:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4159203 00:16:40.024 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.024 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.024 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4159203' 00:16:40.024 killing process with pid 4159203 00:16:40.024 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4159203 00:16:40.024 [2024-07-25 11:58:26.009576] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.024 [2024-07-25 11:58:26.009627] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.024 [2024-07-25 11:58:26.009675] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.024 [2024-07-25 11:58:26.009685] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23cf6d0 name raid_bdev1, state offline 00:16:40.024 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4159203 00:16:40.024 [2024-07-25 11:58:26.033608] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.284 11:58:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:40.284 00:16:40.284 real 0m20.390s 00:16:40.284 user 0m37.217s 00:16:40.284 sys 0m3.783s 00:16:40.284 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.284 11:58:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.284 ************************************ 00:16:40.284 END TEST raid_superblock_test 00:16:40.284 ************************************ 00:16:40.284 11:58:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:16:40.284 11:58:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:40.284 11:58:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:40.284 11:58:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.284 ************************************ 00:16:40.284 START TEST raid_read_error_test 00:16:40.284 ************************************ 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
create_arg 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4LprKRWsfW 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4163066 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4163066 /var/tmp/spdk-raid.sock 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4163066 ']' 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:40.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.284 11:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.284 [2024-07-25 11:58:26.387384] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
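From this point the log is in raid_read_error_test: bdevperf is launched against the same RPC socket with the command echoed at bdev_raid.sh@807 above (reading -t/-w/-o/-q in their usual bdevperf sense: 60 s of randrw, 128k I/Os, queue depth 1), and because of the -z flag it is expected to sit idle until the test releases it; that reading of -z is a gloss, not something stated in the log. Once the raid1 bdev is assembled, the script arms a read failure on one base bdev and then triggers the queued job. Both RPCs appear verbatim further down in this log; the shell variables below are shorthand for this sketch only:

    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # Make reads against the error bdev wrapping BaseBdev1's malloc device fail ...
    $RPC -s $SOCK bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # ... then release the waiting bdevperf job; with raid1 redundancy the I/O is expected
    # to complete despite the failing base bdev (fail_per_s ends up 0.00 below).
    /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests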
00:16:40.284 [2024-07-25 11:58:26.387442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163066 ] 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:40.543 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:40.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:40.543 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:40.544 [2024-07-25 11:58:26.520461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.544 [2024-07-25 11:58:26.607071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.802 [2024-07-25 11:58:26.672993] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.802 [2024-07-25 11:58:26.673033] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.409 11:58:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.409 11:58:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:41.409 11:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:41.409 11:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:41.683 BaseBdev1_malloc 00:16:41.684 11:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:41.684 true 00:16:41.684 11:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:41.943 [2024-07-25 11:58:27.959698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:41.943 [2024-07-25 11:58:27.959743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.943 [2024-07-25 11:58:27.959760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225f190 00:16:41.943 [2024-07-25 11:58:27.959772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.943 [2024-07-25 11:58:27.961360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.943 [2024-07-25 11:58:27.961388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:41.943 BaseBdev1 00:16:41.943 11:58:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:41.943 11:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.201 BaseBdev2_malloc 00:16:42.201 11:58:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:42.460 true 00:16:42.460 11:58:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:42.719 [2024-07-25 11:58:28.645851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:42.719 [2024-07-25 11:58:28.645888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.719 [2024-07-25 11:58:28.645906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2263e20 00:16:42.719 [2024-07-25 11:58:28.645917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.719 [2024-07-25 11:58:28.647293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.719 [2024-07-25 11:58:28.647319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.719 BaseBdev2 00:16:42.719 11:58:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:42.719 11:58:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.977 BaseBdev3_malloc 00:16:42.977 11:58:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:43.235 true 00:16:43.235 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:43.235 [2024-07-25 11:58:29.315940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:43.235 [2024-07-25 11:58:29.315977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.235 [2024-07-25 11:58:29.315997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2264d90 00:16:43.235 [2024-07-25 11:58:29.316009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.235 [2024-07-25 11:58:29.317325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.235 [2024-07-25 11:58:29.317351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:43.235 BaseBdev3 00:16:43.235 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:43.494 [2024-07-25 11:58:29.540562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.494 [2024-07-25 11:58:29.541737] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.494 [2024-07-25 11:58:29.541801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.494 [2024-07-25 11:58:29.541992] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2266ba0 00:16:43.494 [2024-07-25 11:58:29.542007] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:43.494 [2024-07-25 11:58:29.542189] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2266820 00:16:43.494 [2024-07-25 11:58:29.542332] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2266ba0 00:16:43.494 [2024-07-25 11:58:29.542342] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2266ba0 00:16:43.494 [2024-07-25 11:58:29.542437] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.494 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.753 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.753 "name": "raid_bdev1", 00:16:43.753 "uuid": "e5b79d7f-1a7a-45e5-8310-2d588ebc2c55", 00:16:43.753 "strip_size_kb": 0, 00:16:43.753 "state": "online", 00:16:43.753 "raid_level": "raid1", 00:16:43.753 "superblock": true, 00:16:43.753 "num_base_bdevs": 3, 00:16:43.753 "num_base_bdevs_discovered": 3, 00:16:43.753 "num_base_bdevs_operational": 3, 00:16:43.753 "base_bdevs_list": [ 00:16:43.753 { 00:16:43.753 "name": "BaseBdev1", 00:16:43.753 "uuid": "d4552b2f-8552-5b02-90ea-05df15e53ba0", 00:16:43.753 "is_configured": true, 00:16:43.753 "data_offset": 2048, 00:16:43.753 "data_size": 63488 00:16:43.753 }, 00:16:43.753 { 00:16:43.753 "name": "BaseBdev2", 00:16:43.753 "uuid": "89f65704-e2bb-53b9-9de7-87760d513a8e", 00:16:43.753 "is_configured": true, 00:16:43.753 "data_offset": 2048, 00:16:43.753 "data_size": 63488 00:16:43.753 }, 00:16:43.753 { 00:16:43.753 "name": "BaseBdev3", 00:16:43.753 "uuid": "897c33d4-2d73-548d-9cbd-0e20bce537b7", 00:16:43.753 "is_configured": true, 00:16:43.753 "data_offset": 2048, 00:16:43.753 "data_size": 63488 
00:16:43.753 } 00:16:43.753 ] 00:16:43.753 }' 00:16:43.753 11:58:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.753 11:58:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.320 11:58:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:44.320 11:58:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:44.578 [2024-07-25 11:58:30.455213] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x226b690 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.512 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.770 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.770 "name": "raid_bdev1", 00:16:45.770 "uuid": "e5b79d7f-1a7a-45e5-8310-2d588ebc2c55", 00:16:45.770 "strip_size_kb": 0, 00:16:45.770 "state": "online", 00:16:45.770 "raid_level": "raid1", 00:16:45.770 "superblock": true, 00:16:45.770 "num_base_bdevs": 3, 00:16:45.770 "num_base_bdevs_discovered": 3, 00:16:45.770 "num_base_bdevs_operational": 3, 00:16:45.770 "base_bdevs_list": [ 00:16:45.770 { 00:16:45.770 "name": "BaseBdev1", 00:16:45.770 "uuid": "d4552b2f-8552-5b02-90ea-05df15e53ba0", 00:16:45.770 "is_configured": true, 00:16:45.770 "data_offset": 2048, 00:16:45.770 "data_size": 63488 00:16:45.770 }, 00:16:45.770 { 00:16:45.770 "name": "BaseBdev2", 00:16:45.770 "uuid": 
"89f65704-e2bb-53b9-9de7-87760d513a8e", 00:16:45.770 "is_configured": true, 00:16:45.770 "data_offset": 2048, 00:16:45.770 "data_size": 63488 00:16:45.770 }, 00:16:45.770 { 00:16:45.770 "name": "BaseBdev3", 00:16:45.770 "uuid": "897c33d4-2d73-548d-9cbd-0e20bce537b7", 00:16:45.770 "is_configured": true, 00:16:45.770 "data_offset": 2048, 00:16:45.770 "data_size": 63488 00:16:45.770 } 00:16:45.770 ] 00:16:45.770 }' 00:16:45.770 11:58:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.770 11:58:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.338 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:46.596 [2024-07-25 11:58:32.510403] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.596 [2024-07-25 11:58:32.510439] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.596 [2024-07-25 11:58:32.513398] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.596 [2024-07-25 11:58:32.513431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.596 [2024-07-25 11:58:32.513521] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.596 [2024-07-25 11:58:32.513532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2266ba0 name raid_bdev1, state offline 00:16:46.596 0 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4163066 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4163066 ']' 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4163066 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4163066 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4163066' 00:16:46.596 killing process with pid 4163066 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4163066 00:16:46.596 [2024-07-25 11:58:32.587970] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.596 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4163066 00:16:46.596 [2024-07-25 11:58:32.606740] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.4LprKRWsfW 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:46.855 11:58:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:46.855 00:16:46.855 real 0m6.502s 00:16:46.855 user 0m10.157s 00:16:46.855 sys 0m1.181s 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.855 11:58:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 ************************************ 00:16:46.855 END TEST raid_read_error_test 00:16:46.855 ************************************ 00:16:46.855 11:58:32 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:16:46.855 11:58:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:46.855 11:58:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.855 11:58:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 ************************************ 00:16:46.855 START TEST raid_write_error_test 00:16:46.855 ************************************ 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 
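The read-error pass that finishes above is driven entirely over the raid RPC socket; a minimal sketch of the injection and teardown steps, reconstructed from the trace (the socket path, bdev names and temp log name are the ones used in this particular run):

  # inject a read failure on the error bdev wrapping BaseBdev1, then re-read the array state
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # tear the array down and pull the failure rate out of the bdevperf log (6th column of the raid_bdev1 row)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  grep -v Job /raidtest/tmp.4LprKRWsfW | grep raid_bdev1 | awk '{print $6}'   # raid1 keeps redundancy, so this stays 0.00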
00:16:46.855 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iNO67bFdMq 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4164332 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4164332 /var/tmp/spdk-raid.sock 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4164332 ']' 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.856 11:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.115 [2024-07-25 11:58:32.975872] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
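The write-error pass launches bdevperf the same way as the read pass; a condensed sketch of the invocation visible in the trace (redirecting bdevperf output into the mktemp log is an assumption here, the trace only shows the log being created and parsed afterwards):

  bdevperf_log=$(mktemp -p /raidtest)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
          -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
  raid_pid=$!
  # block until the app is listening on the UNIX socket before issuing any bdev_* RPCs
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock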
00:16:47.115 [2024-07-25 11:58:32.975933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164332 ] 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:47.115 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:47.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.115 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:47.115 [2024-07-25 11:58:33.109143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.115 [2024-07-25 11:58:33.191383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.379 [2024-07-25 11:58:33.252925] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.379 [2024-07-25 11:58:33.252961] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.946 11:58:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.946 11:58:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:47.946 11:58:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:47.946 11:58:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:48.204 BaseBdev1_malloc 00:16:48.204 11:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:48.461 true 00:16:48.461 11:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:48.461 [2024-07-25 11:58:34.549839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:48.461 [2024-07-25 11:58:34.549882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.461 [2024-07-25 11:58:34.549899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e7190 00:16:48.461 [2024-07-25 11:58:34.549911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.461 [2024-07-25 11:58:34.551398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.461 [2024-07-25 11:58:34.551424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:48.461 BaseBdev1 00:16:48.461 11:58:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:48.461 11:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:48.719 BaseBdev2_malloc 00:16:48.719 11:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:48.976 true 00:16:48.976 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:49.234 [2024-07-25 11:58:35.239755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:49.234 [2024-07-25 11:58:35.239795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.234 [2024-07-25 11:58:35.239813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ebe20 00:16:49.234 [2024-07-25 11:58:35.239824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.234 [2024-07-25 11:58:35.241102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.234 [2024-07-25 11:58:35.241129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.234 BaseBdev2 00:16:49.234 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:49.234 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.492 BaseBdev3_malloc 00:16:49.492 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:49.750 true 00:16:49.750 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:50.007 [2024-07-25 11:58:35.929961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:50.007 [2024-07-25 11:58:35.930004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.007 [2024-07-25 11:58:35.930025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ecd90 00:16:50.007 [2024-07-25 11:58:35.930036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.007 [2024-07-25 11:58:35.931382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.007 [2024-07-25 11:58:35.931410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.007 BaseBdev3 00:16:50.008 11:58:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:50.266 [2024-07-25 11:58:36.158587] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.266 [2024-07-25 11:58:36.159672] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.266 [2024-07-25 11:58:36.159735] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.266 [2024-07-25 11:58:36.159919] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x20eeba0 00:16:50.266 [2024-07-25 11:58:36.159930] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:50.266 [2024-07-25 11:58:36.160096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20ee820 00:16:50.266 [2024-07-25 11:58:36.160246] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20eeba0 00:16:50.266 [2024-07-25 11:58:36.160256] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20eeba0 00:16:50.266 [2024-07-25 11:58:36.160346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.266 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.524 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.524 "name": "raid_bdev1", 00:16:50.524 "uuid": "05b10625-62bf-48e1-b300-872be6877613", 00:16:50.524 "strip_size_kb": 0, 00:16:50.524 "state": "online", 00:16:50.524 "raid_level": "raid1", 00:16:50.524 "superblock": true, 00:16:50.524 "num_base_bdevs": 3, 00:16:50.524 "num_base_bdevs_discovered": 3, 00:16:50.524 "num_base_bdevs_operational": 3, 00:16:50.524 "base_bdevs_list": [ 00:16:50.524 { 00:16:50.525 "name": "BaseBdev1", 00:16:50.525 "uuid": "4022b522-2274-530f-9d05-da60237ac2ac", 00:16:50.525 "is_configured": true, 00:16:50.525 "data_offset": 2048, 00:16:50.525 "data_size": 63488 00:16:50.525 }, 00:16:50.525 { 00:16:50.525 "name": "BaseBdev2", 00:16:50.525 "uuid": "2ec02dd8-cc0e-57c7-b174-c8a2cc7828b6", 00:16:50.525 "is_configured": true, 00:16:50.525 "data_offset": 2048, 00:16:50.525 "data_size": 63488 00:16:50.525 }, 00:16:50.525 { 00:16:50.525 "name": "BaseBdev3", 00:16:50.525 "uuid": "3516a3d6-1586-5cc4-b95a-1041af3398bd", 00:16:50.525 "is_configured": true, 00:16:50.525 "data_offset": 2048, 00:16:50.525 "data_size": 
63488 00:16:50.525 } 00:16:50.525 ] 00:16:50.525 }' 00:16:50.525 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.525 11:58:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.091 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:51.091 11:58:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:51.091 [2024-07-25 11:58:37.085276] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20f3690 00:16:52.027 11:58:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:52.285 [2024-07-25 11:58:38.191213] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:52.285 [2024-07-25 11:58:38.191270] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.285 [2024-07-25 11:58:38.191460] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x20f3690 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.285 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.543 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.543 "name": "raid_bdev1", 00:16:52.544 "uuid": "05b10625-62bf-48e1-b300-872be6877613", 00:16:52.544 "strip_size_kb": 0, 00:16:52.544 "state": "online", 00:16:52.544 "raid_level": "raid1", 00:16:52.544 "superblock": true, 00:16:52.544 "num_base_bdevs": 3, 
00:16:52.544 "num_base_bdevs_discovered": 2, 00:16:52.544 "num_base_bdevs_operational": 2, 00:16:52.544 "base_bdevs_list": [ 00:16:52.544 { 00:16:52.544 "name": null, 00:16:52.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.544 "is_configured": false, 00:16:52.544 "data_offset": 2048, 00:16:52.544 "data_size": 63488 00:16:52.544 }, 00:16:52.544 { 00:16:52.544 "name": "BaseBdev2", 00:16:52.544 "uuid": "2ec02dd8-cc0e-57c7-b174-c8a2cc7828b6", 00:16:52.544 "is_configured": true, 00:16:52.544 "data_offset": 2048, 00:16:52.544 "data_size": 63488 00:16:52.544 }, 00:16:52.544 { 00:16:52.544 "name": "BaseBdev3", 00:16:52.544 "uuid": "3516a3d6-1586-5cc4-b95a-1041af3398bd", 00:16:52.544 "is_configured": true, 00:16:52.544 "data_offset": 2048, 00:16:52.544 "data_size": 63488 00:16:52.544 } 00:16:52.544 ] 00:16:52.544 }' 00:16:52.544 11:58:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.544 11:58:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:53.368 [2024-07-25 11:58:39.236715] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.368 [2024-07-25 11:58:39.236745] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.368 [2024-07-25 11:58:39.239678] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.368 [2024-07-25 11:58:39.239708] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.368 [2024-07-25 11:58:39.239775] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.368 [2024-07-25 11:58:39.239785] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20eeba0 name raid_bdev1, state offline 00:16:53.368 0 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4164332 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4164332 ']' 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4164332 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4164332 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4164332' 00:16:53.368 killing process with pid 4164332 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4164332 00:16:53.368 [2024-07-25 11:58:39.310656] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.368 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4164332 00:16:53.368 [2024-07-25 11:58:39.329976] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.627 11:58:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iNO67bFdMq 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:53.627 00:16:53.627 real 0m6.642s 00:16:53.627 user 0m10.448s 00:16:53.627 sys 0m1.200s 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.627 11:58:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.627 ************************************ 00:16:53.627 END TEST raid_write_error_test 00:16:53.627 ************************************ 00:16:53.627 11:58:39 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:53.627 11:58:39 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:53.627 11:58:39 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:53.627 11:58:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:53.627 11:58:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.627 11:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.627 ************************************ 00:16:53.627 START TEST raid_state_function_test 00:16:53.627 ************************************ 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4165535 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4165535' 00:16:53.627 Process raid pid: 4165535 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4165535 /var/tmp/spdk-raid.sock 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4165535 ']' 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.627 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.627 [2024-07-25 11:58:39.702721] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
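Once bdev_svc is up and listening, raid_state_function_test exercises the raid state machine purely through RPCs; the first check seen further down is to create the array before any base bdev exists, which leaves it in the "configuring" state. A minimal sketch of that step, using the names from this run:

  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
          -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # all four base bdevs are still missing, so the raid bdev reports state "configuring" with 0 discovered members
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'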
00:16:53.627 [2024-07-25 11:58:39.702780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:53.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.886 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:53.887 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:53.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.887 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:53.887 [2024-07-25 11:58:39.825683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.887 [2024-07-25 11:58:39.911634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.887 [2024-07-25 11:58:39.968802] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.887 [2024-07-25 11:58:39.968833] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:54.891 [2024-07-25 11:58:40.811026] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.891 [2024-07-25 11:58:40.811060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.891 [2024-07-25 11:58:40.811069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.891 [2024-07-25 11:58:40.811080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.891 [2024-07-25 11:58:40.811088] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.891 [2024-07-25 11:58:40.811098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.891 [2024-07-25 11:58:40.811106] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.891 [2024-07-25 11:58:40.811115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.891 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.156 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.156 "name": "Existed_Raid", 00:16:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.156 "strip_size_kb": 64, 00:16:55.156 "state": "configuring", 00:16:55.156 "raid_level": "raid0", 00:16:55.156 "superblock": false, 00:16:55.156 "num_base_bdevs": 4, 00:16:55.156 "num_base_bdevs_discovered": 0, 00:16:55.156 "num_base_bdevs_operational": 4, 00:16:55.156 "base_bdevs_list": [ 00:16:55.156 { 00:16:55.156 "name": "BaseBdev1", 00:16:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.156 "is_configured": false, 00:16:55.156 "data_offset": 0, 00:16:55.156 "data_size": 0 00:16:55.156 }, 00:16:55.156 { 00:16:55.156 "name": "BaseBdev2", 00:16:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.156 "is_configured": false, 00:16:55.156 "data_offset": 0, 00:16:55.156 "data_size": 0 00:16:55.156 }, 00:16:55.156 { 00:16:55.156 "name": "BaseBdev3", 00:16:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.156 "is_configured": false, 00:16:55.156 "data_offset": 0, 00:16:55.156 "data_size": 0 00:16:55.156 }, 00:16:55.156 { 00:16:55.156 "name": "BaseBdev4", 00:16:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.156 "is_configured": false, 00:16:55.156 "data_offset": 0, 00:16:55.156 "data_size": 0 00:16:55.156 } 00:16:55.156 ] 00:16:55.156 }' 00:16:55.156 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.156 11:58:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.723 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.723 [2024-07-25 11:58:41.829573] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.723 [2024-07-25 11:58:41.829600] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20dff60 name Existed_Raid, state configuring 00:16:55.981 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:55.981 
[2024-07-25 11:58:42.058191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.981 [2024-07-25 11:58:42.058214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.981 [2024-07-25 11:58:42.058223] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.981 [2024-07-25 11:58:42.058233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.981 [2024-07-25 11:58:42.058241] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.981 [2024-07-25 11:58:42.058252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.981 [2024-07-25 11:58:42.058260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:55.981 [2024-07-25 11:58:42.058272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:55.981 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.240 [2024-07-25 11:58:42.292167] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.240 BaseBdev1 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:56.240 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.498 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.757 [ 00:16:56.757 { 00:16:56.757 "name": "BaseBdev1", 00:16:56.757 "aliases": [ 00:16:56.757 "b08bafec-fbf6-4301-84d2-a17f8873b752" 00:16:56.757 ], 00:16:56.757 "product_name": "Malloc disk", 00:16:56.757 "block_size": 512, 00:16:56.757 "num_blocks": 65536, 00:16:56.757 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:16:56.757 "assigned_rate_limits": { 00:16:56.757 "rw_ios_per_sec": 0, 00:16:56.757 "rw_mbytes_per_sec": 0, 00:16:56.757 "r_mbytes_per_sec": 0, 00:16:56.757 "w_mbytes_per_sec": 0 00:16:56.757 }, 00:16:56.757 "claimed": true, 00:16:56.757 "claim_type": "exclusive_write", 00:16:56.757 "zoned": false, 00:16:56.757 "supported_io_types": { 00:16:56.757 "read": true, 00:16:56.757 "write": true, 00:16:56.757 "unmap": true, 00:16:56.757 "flush": true, 00:16:56.757 "reset": true, 00:16:56.757 "nvme_admin": false, 00:16:56.757 "nvme_io": false, 00:16:56.757 "nvme_io_md": false, 00:16:56.757 "write_zeroes": true, 00:16:56.757 "zcopy": true, 00:16:56.757 "get_zone_info": false, 00:16:56.757 "zone_management": false, 00:16:56.757 
"zone_append": false, 00:16:56.757 "compare": false, 00:16:56.757 "compare_and_write": false, 00:16:56.757 "abort": true, 00:16:56.757 "seek_hole": false, 00:16:56.757 "seek_data": false, 00:16:56.757 "copy": true, 00:16:56.757 "nvme_iov_md": false 00:16:56.757 }, 00:16:56.757 "memory_domains": [ 00:16:56.757 { 00:16:56.757 "dma_device_id": "system", 00:16:56.757 "dma_device_type": 1 00:16:56.757 }, 00:16:56.757 { 00:16:56.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.757 "dma_device_type": 2 00:16:56.757 } 00:16:56.757 ], 00:16:56.757 "driver_specific": {} 00:16:56.757 } 00:16:56.757 ] 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.757 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.016 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.016 "name": "Existed_Raid", 00:16:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.016 "strip_size_kb": 64, 00:16:57.016 "state": "configuring", 00:16:57.016 "raid_level": "raid0", 00:16:57.016 "superblock": false, 00:16:57.016 "num_base_bdevs": 4, 00:16:57.016 "num_base_bdevs_discovered": 1, 00:16:57.016 "num_base_bdevs_operational": 4, 00:16:57.016 "base_bdevs_list": [ 00:16:57.016 { 00:16:57.016 "name": "BaseBdev1", 00:16:57.016 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:16:57.016 "is_configured": true, 00:16:57.016 "data_offset": 0, 00:16:57.016 "data_size": 65536 00:16:57.016 }, 00:16:57.016 { 00:16:57.016 "name": "BaseBdev2", 00:16:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.016 "is_configured": false, 00:16:57.016 "data_offset": 0, 00:16:57.016 "data_size": 0 00:16:57.016 }, 00:16:57.016 { 00:16:57.016 "name": "BaseBdev3", 00:16:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.016 "is_configured": false, 00:16:57.016 "data_offset": 0, 00:16:57.016 "data_size": 0 00:16:57.016 }, 00:16:57.016 { 00:16:57.016 "name": "BaseBdev4", 00:16:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.016 "is_configured": false, 00:16:57.016 "data_offset": 0, 
00:16:57.016 "data_size": 0 00:16:57.016 } 00:16:57.016 ] 00:16:57.016 }' 00:16:57.016 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.016 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.582 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:57.840 [2024-07-25 11:58:43.764032] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.840 [2024-07-25 11:58:43.764068] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20df7d0 name Existed_Raid, state configuring 00:16:57.840 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:58.099 [2024-07-25 11:58:43.988663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.099 [2024-07-25 11:58:43.990072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.099 [2024-07-25 11:58:43.990103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.099 [2024-07-25 11:58:43.990112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.099 [2024-07-25 11:58:43.990122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.099 [2024-07-25 11:58:43.990130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:58.099 [2024-07-25 11:58:43.990148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.099 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.099 11:58:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.357 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.357 "name": "Existed_Raid", 00:16:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.357 "strip_size_kb": 64, 00:16:58.357 "state": "configuring", 00:16:58.357 "raid_level": "raid0", 00:16:58.357 "superblock": false, 00:16:58.357 "num_base_bdevs": 4, 00:16:58.357 "num_base_bdevs_discovered": 1, 00:16:58.357 "num_base_bdevs_operational": 4, 00:16:58.357 "base_bdevs_list": [ 00:16:58.357 { 00:16:58.357 "name": "BaseBdev1", 00:16:58.357 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:16:58.357 "is_configured": true, 00:16:58.357 "data_offset": 0, 00:16:58.357 "data_size": 65536 00:16:58.357 }, 00:16:58.357 { 00:16:58.357 "name": "BaseBdev2", 00:16:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.357 "is_configured": false, 00:16:58.357 "data_offset": 0, 00:16:58.357 "data_size": 0 00:16:58.357 }, 00:16:58.357 { 00:16:58.357 "name": "BaseBdev3", 00:16:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.357 "is_configured": false, 00:16:58.357 "data_offset": 0, 00:16:58.357 "data_size": 0 00:16:58.357 }, 00:16:58.357 { 00:16:58.357 "name": "BaseBdev4", 00:16:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.357 "is_configured": false, 00:16:58.357 "data_offset": 0, 00:16:58.357 "data_size": 0 00:16:58.357 } 00:16:58.357 ] 00:16:58.357 }' 00:16:58.357 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.357 11:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.924 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:58.924 [2024-07-25 11:58:45.018400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.924 BaseBdev2 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:58.924 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.182 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.441 [ 00:16:59.441 { 00:16:59.441 "name": "BaseBdev2", 00:16:59.441 "aliases": [ 00:16:59.441 "19efd086-4254-4499-9a15-2972be1da278" 00:16:59.441 ], 00:16:59.441 "product_name": "Malloc disk", 00:16:59.441 "block_size": 512, 00:16:59.441 "num_blocks": 65536, 00:16:59.441 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:16:59.441 
"assigned_rate_limits": { 00:16:59.441 "rw_ios_per_sec": 0, 00:16:59.441 "rw_mbytes_per_sec": 0, 00:16:59.441 "r_mbytes_per_sec": 0, 00:16:59.441 "w_mbytes_per_sec": 0 00:16:59.441 }, 00:16:59.441 "claimed": true, 00:16:59.441 "claim_type": "exclusive_write", 00:16:59.441 "zoned": false, 00:16:59.441 "supported_io_types": { 00:16:59.441 "read": true, 00:16:59.441 "write": true, 00:16:59.441 "unmap": true, 00:16:59.441 "flush": true, 00:16:59.441 "reset": true, 00:16:59.441 "nvme_admin": false, 00:16:59.441 "nvme_io": false, 00:16:59.441 "nvme_io_md": false, 00:16:59.441 "write_zeroes": true, 00:16:59.441 "zcopy": true, 00:16:59.441 "get_zone_info": false, 00:16:59.441 "zone_management": false, 00:16:59.441 "zone_append": false, 00:16:59.441 "compare": false, 00:16:59.441 "compare_and_write": false, 00:16:59.441 "abort": true, 00:16:59.441 "seek_hole": false, 00:16:59.441 "seek_data": false, 00:16:59.441 "copy": true, 00:16:59.441 "nvme_iov_md": false 00:16:59.441 }, 00:16:59.441 "memory_domains": [ 00:16:59.441 { 00:16:59.441 "dma_device_id": "system", 00:16:59.441 "dma_device_type": 1 00:16:59.441 }, 00:16:59.441 { 00:16:59.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.441 "dma_device_type": 2 00:16:59.441 } 00:16:59.441 ], 00:16:59.441 "driver_specific": {} 00:16:59.441 } 00:16:59.441 ] 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.441 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.700 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.700 "name": "Existed_Raid", 00:16:59.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.700 "strip_size_kb": 64, 00:16:59.700 "state": "configuring", 00:16:59.700 "raid_level": "raid0", 00:16:59.700 "superblock": false, 00:16:59.700 "num_base_bdevs": 4, 00:16:59.700 "num_base_bdevs_discovered": 2, 
00:16:59.700 "num_base_bdevs_operational": 4, 00:16:59.700 "base_bdevs_list": [ 00:16:59.700 { 00:16:59.700 "name": "BaseBdev1", 00:16:59.700 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev2", 00:16:59.700 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev3", 00:16:59.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.700 "is_configured": false, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 0 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev4", 00:16:59.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.700 "is_configured": false, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 0 00:16:59.700 } 00:16:59.700 ] 00:16:59.700 }' 00:16:59.700 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.700 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.279 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:00.537 [2024-07-25 11:58:46.509520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.537 BaseBdev3 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:00.537 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.796 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.054 [ 00:17:01.054 { 00:17:01.054 "name": "BaseBdev3", 00:17:01.054 "aliases": [ 00:17:01.054 "627fba54-ca85-4b74-86dd-41a93a437ad3" 00:17:01.054 ], 00:17:01.054 "product_name": "Malloc disk", 00:17:01.054 "block_size": 512, 00:17:01.054 "num_blocks": 65536, 00:17:01.054 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:01.054 "assigned_rate_limits": { 00:17:01.054 "rw_ios_per_sec": 0, 00:17:01.054 "rw_mbytes_per_sec": 0, 00:17:01.054 "r_mbytes_per_sec": 0, 00:17:01.054 "w_mbytes_per_sec": 0 00:17:01.054 }, 00:17:01.054 "claimed": true, 00:17:01.054 "claim_type": "exclusive_write", 00:17:01.054 "zoned": false, 00:17:01.054 "supported_io_types": { 00:17:01.054 "read": true, 00:17:01.054 "write": true, 00:17:01.054 "unmap": true, 00:17:01.054 "flush": true, 00:17:01.054 "reset": true, 00:17:01.054 "nvme_admin": false, 00:17:01.054 "nvme_io": false, 00:17:01.054 
"nvme_io_md": false, 00:17:01.054 "write_zeroes": true, 00:17:01.054 "zcopy": true, 00:17:01.054 "get_zone_info": false, 00:17:01.054 "zone_management": false, 00:17:01.054 "zone_append": false, 00:17:01.054 "compare": false, 00:17:01.054 "compare_and_write": false, 00:17:01.054 "abort": true, 00:17:01.054 "seek_hole": false, 00:17:01.054 "seek_data": false, 00:17:01.054 "copy": true, 00:17:01.054 "nvme_iov_md": false 00:17:01.054 }, 00:17:01.054 "memory_domains": [ 00:17:01.054 { 00:17:01.054 "dma_device_id": "system", 00:17:01.054 "dma_device_type": 1 00:17:01.054 }, 00:17:01.054 { 00:17:01.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.054 "dma_device_type": 2 00:17:01.054 } 00:17:01.054 ], 00:17:01.054 "driver_specific": {} 00:17:01.054 } 00:17:01.054 ] 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.054 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.313 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.313 "name": "Existed_Raid", 00:17:01.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.313 "strip_size_kb": 64, 00:17:01.313 "state": "configuring", 00:17:01.313 "raid_level": "raid0", 00:17:01.313 "superblock": false, 00:17:01.313 "num_base_bdevs": 4, 00:17:01.313 "num_base_bdevs_discovered": 3, 00:17:01.313 "num_base_bdevs_operational": 4, 00:17:01.313 "base_bdevs_list": [ 00:17:01.313 { 00:17:01.313 "name": "BaseBdev1", 00:17:01.313 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:17:01.313 "is_configured": true, 00:17:01.313 "data_offset": 0, 00:17:01.313 "data_size": 65536 00:17:01.313 }, 00:17:01.313 { 00:17:01.313 "name": "BaseBdev2", 00:17:01.313 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:17:01.313 "is_configured": true, 00:17:01.313 "data_offset": 0, 00:17:01.313 "data_size": 65536 00:17:01.313 }, 00:17:01.313 { 
00:17:01.313 "name": "BaseBdev3", 00:17:01.313 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:01.313 "is_configured": true, 00:17:01.313 "data_offset": 0, 00:17:01.313 "data_size": 65536 00:17:01.313 }, 00:17:01.313 { 00:17:01.313 "name": "BaseBdev4", 00:17:01.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.313 "is_configured": false, 00:17:01.313 "data_offset": 0, 00:17:01.313 "data_size": 0 00:17:01.313 } 00:17:01.313 ] 00:17:01.313 }' 00:17:01.313 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.313 11:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.880 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:01.880 [2024-07-25 11:58:47.992596] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:01.880 [2024-07-25 11:58:47.992628] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x20e0830 00:17:01.880 [2024-07-25 11:58:47.992636] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:01.880 [2024-07-25 11:58:47.992822] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d9160 00:17:01.880 [2024-07-25 11:58:47.992935] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20e0830 00:17:01.880 [2024-07-25 11:58:47.992944] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20e0830 00:17:01.880 [2024-07-25 11:58:47.993088] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.880 BaseBdev4 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.138 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:02.397 [ 00:17:02.397 { 00:17:02.397 "name": "BaseBdev4", 00:17:02.397 "aliases": [ 00:17:02.397 "c8766232-5e64-46c9-9316-2c9ee577b8e2" 00:17:02.397 ], 00:17:02.397 "product_name": "Malloc disk", 00:17:02.397 "block_size": 512, 00:17:02.397 "num_blocks": 65536, 00:17:02.397 "uuid": "c8766232-5e64-46c9-9316-2c9ee577b8e2", 00:17:02.397 "assigned_rate_limits": { 00:17:02.397 "rw_ios_per_sec": 0, 00:17:02.397 "rw_mbytes_per_sec": 0, 00:17:02.397 "r_mbytes_per_sec": 0, 00:17:02.397 "w_mbytes_per_sec": 0 00:17:02.397 }, 00:17:02.397 "claimed": true, 00:17:02.397 "claim_type": "exclusive_write", 00:17:02.397 "zoned": false, 00:17:02.397 "supported_io_types": { 
00:17:02.397 "read": true, 00:17:02.397 "write": true, 00:17:02.397 "unmap": true, 00:17:02.397 "flush": true, 00:17:02.397 "reset": true, 00:17:02.397 "nvme_admin": false, 00:17:02.397 "nvme_io": false, 00:17:02.397 "nvme_io_md": false, 00:17:02.397 "write_zeroes": true, 00:17:02.397 "zcopy": true, 00:17:02.397 "get_zone_info": false, 00:17:02.397 "zone_management": false, 00:17:02.397 "zone_append": false, 00:17:02.397 "compare": false, 00:17:02.397 "compare_and_write": false, 00:17:02.397 "abort": true, 00:17:02.397 "seek_hole": false, 00:17:02.397 "seek_data": false, 00:17:02.397 "copy": true, 00:17:02.397 "nvme_iov_md": false 00:17:02.397 }, 00:17:02.397 "memory_domains": [ 00:17:02.397 { 00:17:02.397 "dma_device_id": "system", 00:17:02.397 "dma_device_type": 1 00:17:02.397 }, 00:17:02.397 { 00:17:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.397 "dma_device_type": 2 00:17:02.397 } 00:17:02.397 ], 00:17:02.397 "driver_specific": {} 00:17:02.397 } 00:17:02.397 ] 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.397 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.656 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.656 "name": "Existed_Raid", 00:17:02.656 "uuid": "ff6e5b66-bef6-4a82-87df-1823d7054b91", 00:17:02.656 "strip_size_kb": 64, 00:17:02.656 "state": "online", 00:17:02.656 "raid_level": "raid0", 00:17:02.656 "superblock": false, 00:17:02.656 "num_base_bdevs": 4, 00:17:02.656 "num_base_bdevs_discovered": 4, 00:17:02.656 "num_base_bdevs_operational": 4, 00:17:02.656 "base_bdevs_list": [ 00:17:02.656 { 00:17:02.656 "name": "BaseBdev1", 00:17:02.656 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:17:02.656 "is_configured": true, 00:17:02.656 "data_offset": 0, 00:17:02.656 "data_size": 65536 00:17:02.656 }, 00:17:02.656 { 00:17:02.656 "name": 
"BaseBdev2", 00:17:02.656 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:17:02.656 "is_configured": true, 00:17:02.656 "data_offset": 0, 00:17:02.656 "data_size": 65536 00:17:02.656 }, 00:17:02.656 { 00:17:02.656 "name": "BaseBdev3", 00:17:02.656 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:02.656 "is_configured": true, 00:17:02.656 "data_offset": 0, 00:17:02.656 "data_size": 65536 00:17:02.656 }, 00:17:02.656 { 00:17:02.656 "name": "BaseBdev4", 00:17:02.656 "uuid": "c8766232-5e64-46c9-9316-2c9ee577b8e2", 00:17:02.656 "is_configured": true, 00:17:02.656 "data_offset": 0, 00:17:02.656 "data_size": 65536 00:17:02.656 } 00:17:02.656 ] 00:17:02.656 }' 00:17:02.656 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.656 11:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:03.222 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:03.481 [2024-07-25 11:58:49.468778] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.481 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:03.481 "name": "Existed_Raid", 00:17:03.481 "aliases": [ 00:17:03.481 "ff6e5b66-bef6-4a82-87df-1823d7054b91" 00:17:03.481 ], 00:17:03.481 "product_name": "Raid Volume", 00:17:03.481 "block_size": 512, 00:17:03.481 "num_blocks": 262144, 00:17:03.481 "uuid": "ff6e5b66-bef6-4a82-87df-1823d7054b91", 00:17:03.481 "assigned_rate_limits": { 00:17:03.481 "rw_ios_per_sec": 0, 00:17:03.481 "rw_mbytes_per_sec": 0, 00:17:03.481 "r_mbytes_per_sec": 0, 00:17:03.481 "w_mbytes_per_sec": 0 00:17:03.481 }, 00:17:03.481 "claimed": false, 00:17:03.481 "zoned": false, 00:17:03.481 "supported_io_types": { 00:17:03.481 "read": true, 00:17:03.481 "write": true, 00:17:03.481 "unmap": true, 00:17:03.481 "flush": true, 00:17:03.481 "reset": true, 00:17:03.481 "nvme_admin": false, 00:17:03.481 "nvme_io": false, 00:17:03.481 "nvme_io_md": false, 00:17:03.481 "write_zeroes": true, 00:17:03.481 "zcopy": false, 00:17:03.481 "get_zone_info": false, 00:17:03.481 "zone_management": false, 00:17:03.481 "zone_append": false, 00:17:03.481 "compare": false, 00:17:03.481 "compare_and_write": false, 00:17:03.481 "abort": false, 00:17:03.481 "seek_hole": false, 00:17:03.481 "seek_data": false, 00:17:03.481 "copy": false, 00:17:03.481 "nvme_iov_md": false 00:17:03.481 }, 00:17:03.481 "memory_domains": [ 00:17:03.481 { 00:17:03.481 "dma_device_id": "system", 00:17:03.481 "dma_device_type": 1 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.481 
"dma_device_type": 2 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "system", 00:17:03.481 "dma_device_type": 1 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.481 "dma_device_type": 2 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "system", 00:17:03.481 "dma_device_type": 1 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.481 "dma_device_type": 2 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "system", 00:17:03.481 "dma_device_type": 1 00:17:03.481 }, 00:17:03.481 { 00:17:03.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.481 "dma_device_type": 2 00:17:03.481 } 00:17:03.481 ], 00:17:03.481 "driver_specific": { 00:17:03.481 "raid": { 00:17:03.481 "uuid": "ff6e5b66-bef6-4a82-87df-1823d7054b91", 00:17:03.481 "strip_size_kb": 64, 00:17:03.481 "state": "online", 00:17:03.481 "raid_level": "raid0", 00:17:03.481 "superblock": false, 00:17:03.481 "num_base_bdevs": 4, 00:17:03.481 "num_base_bdevs_discovered": 4, 00:17:03.481 "num_base_bdevs_operational": 4, 00:17:03.481 "base_bdevs_list": [ 00:17:03.481 { 00:17:03.481 "name": "BaseBdev1", 00:17:03.482 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:17:03.482 "is_configured": true, 00:17:03.482 "data_offset": 0, 00:17:03.482 "data_size": 65536 00:17:03.482 }, 00:17:03.482 { 00:17:03.482 "name": "BaseBdev2", 00:17:03.482 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:17:03.482 "is_configured": true, 00:17:03.482 "data_offset": 0, 00:17:03.482 "data_size": 65536 00:17:03.482 }, 00:17:03.482 { 00:17:03.482 "name": "BaseBdev3", 00:17:03.482 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:03.482 "is_configured": true, 00:17:03.482 "data_offset": 0, 00:17:03.482 "data_size": 65536 00:17:03.482 }, 00:17:03.482 { 00:17:03.482 "name": "BaseBdev4", 00:17:03.482 "uuid": "c8766232-5e64-46c9-9316-2c9ee577b8e2", 00:17:03.482 "is_configured": true, 00:17:03.482 "data_offset": 0, 00:17:03.482 "data_size": 65536 00:17:03.482 } 00:17:03.482 ] 00:17:03.482 } 00:17:03.482 } 00:17:03.482 }' 00:17:03.482 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.482 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:03.482 BaseBdev2 00:17:03.482 BaseBdev3 00:17:03.482 BaseBdev4' 00:17:03.482 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.482 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:03.482 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.740 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.740 "name": "BaseBdev1", 00:17:03.740 "aliases": [ 00:17:03.740 "b08bafec-fbf6-4301-84d2-a17f8873b752" 00:17:03.740 ], 00:17:03.740 "product_name": "Malloc disk", 00:17:03.740 "block_size": 512, 00:17:03.740 "num_blocks": 65536, 00:17:03.740 "uuid": "b08bafec-fbf6-4301-84d2-a17f8873b752", 00:17:03.741 "assigned_rate_limits": { 00:17:03.741 "rw_ios_per_sec": 0, 00:17:03.741 "rw_mbytes_per_sec": 0, 00:17:03.741 "r_mbytes_per_sec": 0, 00:17:03.741 "w_mbytes_per_sec": 0 00:17:03.741 }, 00:17:03.741 "claimed": true, 00:17:03.741 "claim_type": "exclusive_write", 
00:17:03.741 "zoned": false, 00:17:03.741 "supported_io_types": { 00:17:03.741 "read": true, 00:17:03.741 "write": true, 00:17:03.741 "unmap": true, 00:17:03.741 "flush": true, 00:17:03.741 "reset": true, 00:17:03.741 "nvme_admin": false, 00:17:03.741 "nvme_io": false, 00:17:03.741 "nvme_io_md": false, 00:17:03.741 "write_zeroes": true, 00:17:03.741 "zcopy": true, 00:17:03.741 "get_zone_info": false, 00:17:03.741 "zone_management": false, 00:17:03.741 "zone_append": false, 00:17:03.741 "compare": false, 00:17:03.741 "compare_and_write": false, 00:17:03.741 "abort": true, 00:17:03.741 "seek_hole": false, 00:17:03.741 "seek_data": false, 00:17:03.741 "copy": true, 00:17:03.741 "nvme_iov_md": false 00:17:03.741 }, 00:17:03.741 "memory_domains": [ 00:17:03.741 { 00:17:03.741 "dma_device_id": "system", 00:17:03.741 "dma_device_type": 1 00:17:03.741 }, 00:17:03.741 { 00:17:03.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.741 "dma_device_type": 2 00:17:03.741 } 00:17:03.741 ], 00:17:03.741 "driver_specific": {} 00:17:03.741 }' 00:17:03.741 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.741 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.741 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:03.741 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.999 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.999 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.999 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.999 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:03.999 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.258 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.258 "name": "BaseBdev2", 00:17:04.258 "aliases": [ 00:17:04.258 "19efd086-4254-4499-9a15-2972be1da278" 00:17:04.258 ], 00:17:04.258 "product_name": "Malloc disk", 00:17:04.258 "block_size": 512, 00:17:04.258 "num_blocks": 65536, 00:17:04.258 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:17:04.258 "assigned_rate_limits": { 00:17:04.258 "rw_ios_per_sec": 0, 00:17:04.258 "rw_mbytes_per_sec": 0, 00:17:04.258 "r_mbytes_per_sec": 0, 00:17:04.258 "w_mbytes_per_sec": 0 00:17:04.258 }, 00:17:04.258 "claimed": true, 00:17:04.258 "claim_type": "exclusive_write", 00:17:04.258 "zoned": false, 00:17:04.258 "supported_io_types": { 00:17:04.258 "read": true, 00:17:04.258 "write": true, 00:17:04.258 "unmap": true, 00:17:04.258 "flush": true, 
00:17:04.258 "reset": true, 00:17:04.258 "nvme_admin": false, 00:17:04.258 "nvme_io": false, 00:17:04.258 "nvme_io_md": false, 00:17:04.258 "write_zeroes": true, 00:17:04.258 "zcopy": true, 00:17:04.258 "get_zone_info": false, 00:17:04.258 "zone_management": false, 00:17:04.258 "zone_append": false, 00:17:04.258 "compare": false, 00:17:04.258 "compare_and_write": false, 00:17:04.258 "abort": true, 00:17:04.258 "seek_hole": false, 00:17:04.258 "seek_data": false, 00:17:04.258 "copy": true, 00:17:04.258 "nvme_iov_md": false 00:17:04.258 }, 00:17:04.258 "memory_domains": [ 00:17:04.258 { 00:17:04.258 "dma_device_id": "system", 00:17:04.258 "dma_device_type": 1 00:17:04.258 }, 00:17:04.258 { 00:17:04.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.258 "dma_device_type": 2 00:17:04.258 } 00:17:04.258 ], 00:17:04.258 "driver_specific": {} 00:17:04.258 }' 00:17:04.258 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.258 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.516 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.775 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.775 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.775 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:04.775 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.775 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.775 "name": "BaseBdev3", 00:17:04.775 "aliases": [ 00:17:04.775 "627fba54-ca85-4b74-86dd-41a93a437ad3" 00:17:04.775 ], 00:17:04.775 "product_name": "Malloc disk", 00:17:04.775 "block_size": 512, 00:17:04.775 "num_blocks": 65536, 00:17:04.775 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:04.775 "assigned_rate_limits": { 00:17:04.775 "rw_ios_per_sec": 0, 00:17:04.775 "rw_mbytes_per_sec": 0, 00:17:04.775 "r_mbytes_per_sec": 0, 00:17:04.775 "w_mbytes_per_sec": 0 00:17:04.775 }, 00:17:04.775 "claimed": true, 00:17:04.775 "claim_type": "exclusive_write", 00:17:04.775 "zoned": false, 00:17:04.775 "supported_io_types": { 00:17:04.775 "read": true, 00:17:04.775 "write": true, 00:17:04.775 "unmap": true, 00:17:04.775 "flush": true, 00:17:04.775 "reset": true, 00:17:04.775 "nvme_admin": false, 00:17:04.775 "nvme_io": false, 00:17:04.775 "nvme_io_md": false, 00:17:04.775 "write_zeroes": true, 00:17:04.775 
"zcopy": true, 00:17:04.775 "get_zone_info": false, 00:17:04.775 "zone_management": false, 00:17:04.775 "zone_append": false, 00:17:04.775 "compare": false, 00:17:04.775 "compare_and_write": false, 00:17:04.775 "abort": true, 00:17:04.775 "seek_hole": false, 00:17:04.775 "seek_data": false, 00:17:04.775 "copy": true, 00:17:04.775 "nvme_iov_md": false 00:17:04.775 }, 00:17:04.775 "memory_domains": [ 00:17:04.775 { 00:17:04.775 "dma_device_id": "system", 00:17:04.775 "dma_device_type": 1 00:17:04.775 }, 00:17:04.775 { 00:17:04.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.775 "dma_device_type": 2 00:17:04.775 } 00:17:04.775 ], 00:17:04.775 "driver_specific": {} 00:17:04.775 }' 00:17:05.033 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.033 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.033 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.033 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.033 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.292 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.292 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.292 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.292 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.292 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.550 "name": "BaseBdev4", 00:17:05.550 "aliases": [ 00:17:05.550 "c8766232-5e64-46c9-9316-2c9ee577b8e2" 00:17:05.550 ], 00:17:05.550 "product_name": "Malloc disk", 00:17:05.550 "block_size": 512, 00:17:05.550 "num_blocks": 65536, 00:17:05.550 "uuid": "c8766232-5e64-46c9-9316-2c9ee577b8e2", 00:17:05.550 "assigned_rate_limits": { 00:17:05.550 "rw_ios_per_sec": 0, 00:17:05.550 "rw_mbytes_per_sec": 0, 00:17:05.550 "r_mbytes_per_sec": 0, 00:17:05.550 "w_mbytes_per_sec": 0 00:17:05.550 }, 00:17:05.550 "claimed": true, 00:17:05.550 "claim_type": "exclusive_write", 00:17:05.550 "zoned": false, 00:17:05.550 "supported_io_types": { 00:17:05.550 "read": true, 00:17:05.550 "write": true, 00:17:05.550 "unmap": true, 00:17:05.550 "flush": true, 00:17:05.550 "reset": true, 00:17:05.550 "nvme_admin": false, 00:17:05.550 "nvme_io": false, 00:17:05.550 "nvme_io_md": false, 00:17:05.550 "write_zeroes": true, 00:17:05.550 "zcopy": true, 00:17:05.550 "get_zone_info": false, 00:17:05.550 "zone_management": false, 00:17:05.550 "zone_append": false, 00:17:05.550 "compare": false, 00:17:05.550 
"compare_and_write": false, 00:17:05.550 "abort": true, 00:17:05.550 "seek_hole": false, 00:17:05.550 "seek_data": false, 00:17:05.550 "copy": true, 00:17:05.550 "nvme_iov_md": false 00:17:05.550 }, 00:17:05.550 "memory_domains": [ 00:17:05.550 { 00:17:05.550 "dma_device_id": "system", 00:17:05.550 "dma_device_type": 1 00:17:05.550 }, 00:17:05.550 { 00:17:05.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.550 "dma_device_type": 2 00:17:05.550 } 00:17:05.550 ], 00:17:05.550 "driver_specific": {} 00:17:05.550 }' 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.550 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.809 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.809 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.809 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.809 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.809 11:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:06.067 [2024-07-25 11:58:51.991179] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.067 [2024-07-25 11:58:51.991202] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.067 [2024-07-25 11:58:51.991243] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.067 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.068 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.068 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.326 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.326 "name": "Existed_Raid", 00:17:06.326 "uuid": "ff6e5b66-bef6-4a82-87df-1823d7054b91", 00:17:06.326 "strip_size_kb": 64, 00:17:06.326 "state": "offline", 00:17:06.326 "raid_level": "raid0", 00:17:06.326 "superblock": false, 00:17:06.326 "num_base_bdevs": 4, 00:17:06.326 "num_base_bdevs_discovered": 3, 00:17:06.326 "num_base_bdevs_operational": 3, 00:17:06.326 "base_bdevs_list": [ 00:17:06.326 { 00:17:06.326 "name": null, 00:17:06.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.326 "is_configured": false, 00:17:06.326 "data_offset": 0, 00:17:06.326 "data_size": 65536 00:17:06.326 }, 00:17:06.326 { 00:17:06.326 "name": "BaseBdev2", 00:17:06.326 "uuid": "19efd086-4254-4499-9a15-2972be1da278", 00:17:06.326 "is_configured": true, 00:17:06.326 "data_offset": 0, 00:17:06.326 "data_size": 65536 00:17:06.326 }, 00:17:06.326 { 00:17:06.326 "name": "BaseBdev3", 00:17:06.326 "uuid": "627fba54-ca85-4b74-86dd-41a93a437ad3", 00:17:06.326 "is_configured": true, 00:17:06.326 "data_offset": 0, 00:17:06.326 "data_size": 65536 00:17:06.326 }, 00:17:06.326 { 00:17:06.326 "name": "BaseBdev4", 00:17:06.326 "uuid": "c8766232-5e64-46c9-9316-2c9ee577b8e2", 00:17:06.326 "is_configured": true, 00:17:06.326 "data_offset": 0, 00:17:06.326 "data_size": 65536 00:17:06.326 } 00:17:06.326 ] 00:17:06.326 }' 00:17:06.326 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.326 11:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.893 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:06.893 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:06.893 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.893 11:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:07.151 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:07.151 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.151 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:07.151 [2024-07-25 11:58:53.251559] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ 
)) 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.421 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:07.725 [2024-07-25 11:58:53.718747] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.725 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:07.725 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:07.725 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.725 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:08.030 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:08.030 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:08.030 11:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:08.288 [2024-07-25 11:58:54.185955] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:08.288 [2024-07-25 11:58:54.185993] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20e0830 name Existed_Raid, state offline 00:17:08.288 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:08.288 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:08.288 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.288 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:08.547 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:08.547 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:08.547 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:08.547 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:08.547 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:08.548 BaseBdev2 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 
-- # waitforbdev BaseBdev2 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:08.548 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.807 11:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.065 [ 00:17:09.065 { 00:17:09.065 "name": "BaseBdev2", 00:17:09.065 "aliases": [ 00:17:09.065 "940f4bb6-7314-4030-a66a-9355795825bd" 00:17:09.065 ], 00:17:09.065 "product_name": "Malloc disk", 00:17:09.065 "block_size": 512, 00:17:09.065 "num_blocks": 65536, 00:17:09.065 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:09.065 "assigned_rate_limits": { 00:17:09.065 "rw_ios_per_sec": 0, 00:17:09.065 "rw_mbytes_per_sec": 0, 00:17:09.065 "r_mbytes_per_sec": 0, 00:17:09.065 "w_mbytes_per_sec": 0 00:17:09.065 }, 00:17:09.065 "claimed": false, 00:17:09.065 "zoned": false, 00:17:09.065 "supported_io_types": { 00:17:09.065 "read": true, 00:17:09.065 "write": true, 00:17:09.065 "unmap": true, 00:17:09.065 "flush": true, 00:17:09.065 "reset": true, 00:17:09.065 "nvme_admin": false, 00:17:09.065 "nvme_io": false, 00:17:09.065 "nvme_io_md": false, 00:17:09.065 "write_zeroes": true, 00:17:09.065 "zcopy": true, 00:17:09.065 "get_zone_info": false, 00:17:09.065 "zone_management": false, 00:17:09.065 "zone_append": false, 00:17:09.065 "compare": false, 00:17:09.065 "compare_and_write": false, 00:17:09.065 "abort": true, 00:17:09.065 "seek_hole": false, 00:17:09.065 "seek_data": false, 00:17:09.065 "copy": true, 00:17:09.065 "nvme_iov_md": false 00:17:09.065 }, 00:17:09.065 "memory_domains": [ 00:17:09.065 { 00:17:09.065 "dma_device_id": "system", 00:17:09.065 "dma_device_type": 1 00:17:09.065 }, 00:17:09.065 { 00:17:09.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.065 "dma_device_type": 2 00:17:09.065 } 00:17:09.065 ], 00:17:09.065 "driver_specific": {} 00:17:09.065 } 00:17:09.065 ] 00:17:09.065 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:09.065 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:09.065 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:09.065 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:09.324 BaseBdev3 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:09.324 11:58:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:09.324 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.582 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:09.840 [ 00:17:09.840 { 00:17:09.840 "name": "BaseBdev3", 00:17:09.840 "aliases": [ 00:17:09.840 "9f966478-4e35-4cf2-95ca-f90e112fe21f" 00:17:09.840 ], 00:17:09.840 "product_name": "Malloc disk", 00:17:09.840 "block_size": 512, 00:17:09.840 "num_blocks": 65536, 00:17:09.840 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:09.840 "assigned_rate_limits": { 00:17:09.840 "rw_ios_per_sec": 0, 00:17:09.840 "rw_mbytes_per_sec": 0, 00:17:09.840 "r_mbytes_per_sec": 0, 00:17:09.840 "w_mbytes_per_sec": 0 00:17:09.840 }, 00:17:09.840 "claimed": false, 00:17:09.840 "zoned": false, 00:17:09.840 "supported_io_types": { 00:17:09.840 "read": true, 00:17:09.840 "write": true, 00:17:09.840 "unmap": true, 00:17:09.840 "flush": true, 00:17:09.840 "reset": true, 00:17:09.840 "nvme_admin": false, 00:17:09.840 "nvme_io": false, 00:17:09.840 "nvme_io_md": false, 00:17:09.840 "write_zeroes": true, 00:17:09.840 "zcopy": true, 00:17:09.840 "get_zone_info": false, 00:17:09.840 "zone_management": false, 00:17:09.840 "zone_append": false, 00:17:09.840 "compare": false, 00:17:09.840 "compare_and_write": false, 00:17:09.840 "abort": true, 00:17:09.840 "seek_hole": false, 00:17:09.840 "seek_data": false, 00:17:09.840 "copy": true, 00:17:09.840 "nvme_iov_md": false 00:17:09.840 }, 00:17:09.840 "memory_domains": [ 00:17:09.840 { 00:17:09.840 "dma_device_id": "system", 00:17:09.840 "dma_device_type": 1 00:17:09.840 }, 00:17:09.840 { 00:17:09.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.840 "dma_device_type": 2 00:17:09.840 } 00:17:09.840 ], 00:17:09.840 "driver_specific": {} 00:17:09.840 } 00:17:09.840 ] 00:17:09.840 11:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:09.840 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:09.840 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:09.840 11:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:10.098 BaseBdev4 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:17:10.098 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.356 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.356 [ 00:17:10.356 { 00:17:10.356 "name": "BaseBdev4", 00:17:10.356 "aliases": [ 00:17:10.356 "e83d80b7-d84e-4337-9ecd-511ae5120758" 00:17:10.356 ], 00:17:10.356 "product_name": "Malloc disk", 00:17:10.356 "block_size": 512, 00:17:10.356 "num_blocks": 65536, 00:17:10.356 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:10.356 "assigned_rate_limits": { 00:17:10.356 "rw_ios_per_sec": 0, 00:17:10.356 "rw_mbytes_per_sec": 0, 00:17:10.356 "r_mbytes_per_sec": 0, 00:17:10.356 "w_mbytes_per_sec": 0 00:17:10.356 }, 00:17:10.356 "claimed": false, 00:17:10.356 "zoned": false, 00:17:10.356 "supported_io_types": { 00:17:10.356 "read": true, 00:17:10.356 "write": true, 00:17:10.356 "unmap": true, 00:17:10.356 "flush": true, 00:17:10.356 "reset": true, 00:17:10.356 "nvme_admin": false, 00:17:10.356 "nvme_io": false, 00:17:10.356 "nvme_io_md": false, 00:17:10.356 "write_zeroes": true, 00:17:10.356 "zcopy": true, 00:17:10.356 "get_zone_info": false, 00:17:10.356 "zone_management": false, 00:17:10.356 "zone_append": false, 00:17:10.356 "compare": false, 00:17:10.356 "compare_and_write": false, 00:17:10.356 "abort": true, 00:17:10.356 "seek_hole": false, 00:17:10.356 "seek_data": false, 00:17:10.356 "copy": true, 00:17:10.356 "nvme_iov_md": false 00:17:10.356 }, 00:17:10.356 "memory_domains": [ 00:17:10.356 { 00:17:10.356 "dma_device_id": "system", 00:17:10.356 "dma_device_type": 1 00:17:10.356 }, 00:17:10.356 { 00:17:10.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.356 "dma_device_type": 2 00:17:10.356 } 00:17:10.356 ], 00:17:10.356 "driver_specific": {} 00:17:10.356 } 00:17:10.356 ] 00:17:10.356 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:10.356 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:10.356 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:10.356 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:10.614 [2024-07-25 11:58:56.675235] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.614 [2024-07-25 11:58:56.675273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.614 [2024-07-25 11:58:56.675292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.614 [2024-07-25 11:58:56.676544] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.614 [2024-07-25 11:58:56.676583] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.614 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.615 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.872 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.872 "name": "Existed_Raid", 00:17:10.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.872 "strip_size_kb": 64, 00:17:10.872 "state": "configuring", 00:17:10.872 "raid_level": "raid0", 00:17:10.872 "superblock": false, 00:17:10.872 "num_base_bdevs": 4, 00:17:10.872 "num_base_bdevs_discovered": 3, 00:17:10.872 "num_base_bdevs_operational": 4, 00:17:10.872 "base_bdevs_list": [ 00:17:10.872 { 00:17:10.872 "name": "BaseBdev1", 00:17:10.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.872 "is_configured": false, 00:17:10.872 "data_offset": 0, 00:17:10.872 "data_size": 0 00:17:10.872 }, 00:17:10.872 { 00:17:10.872 "name": "BaseBdev2", 00:17:10.872 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:10.872 "is_configured": true, 00:17:10.872 "data_offset": 0, 00:17:10.872 "data_size": 65536 00:17:10.872 }, 00:17:10.872 { 00:17:10.872 "name": "BaseBdev3", 00:17:10.872 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:10.872 "is_configured": true, 00:17:10.872 "data_offset": 0, 00:17:10.872 "data_size": 65536 00:17:10.872 }, 00:17:10.872 { 00:17:10.872 "name": "BaseBdev4", 00:17:10.872 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:10.872 "is_configured": true, 00:17:10.872 "data_offset": 0, 00:17:10.872 "data_size": 65536 00:17:10.872 } 00:17:10.872 ] 00:17:10.872 }' 00:17:10.872 11:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.872 11:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.437 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:11.695 [2024-07-25 11:58:57.673843] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.695 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.953 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.953 "name": "Existed_Raid", 00:17:11.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.953 "strip_size_kb": 64, 00:17:11.953 "state": "configuring", 00:17:11.953 "raid_level": "raid0", 00:17:11.953 "superblock": false, 00:17:11.953 "num_base_bdevs": 4, 00:17:11.953 "num_base_bdevs_discovered": 2, 00:17:11.953 "num_base_bdevs_operational": 4, 00:17:11.953 "base_bdevs_list": [ 00:17:11.953 { 00:17:11.953 "name": "BaseBdev1", 00:17:11.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.953 "is_configured": false, 00:17:11.953 "data_offset": 0, 00:17:11.953 "data_size": 0 00:17:11.953 }, 00:17:11.953 { 00:17:11.953 "name": null, 00:17:11.953 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:11.953 "is_configured": false, 00:17:11.953 "data_offset": 0, 00:17:11.953 "data_size": 65536 00:17:11.953 }, 00:17:11.953 { 00:17:11.953 "name": "BaseBdev3", 00:17:11.953 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:11.953 "is_configured": true, 00:17:11.953 "data_offset": 0, 00:17:11.953 "data_size": 65536 00:17:11.953 }, 00:17:11.953 { 00:17:11.953 "name": "BaseBdev4", 00:17:11.953 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:11.953 "is_configured": true, 00:17:11.953 "data_offset": 0, 00:17:11.953 "data_size": 65536 00:17:11.954 } 00:17:11.954 ] 00:17:11.954 }' 00:17:11.954 11:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.954 11:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.519 11:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.519 11:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:12.776 11:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:12.776 11:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:12.776 [2024-07-25 11:58:58.888120] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:12.776 BaseBdev1 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.034 11:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.034 11:58:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.293 [ 00:17:13.293 { 00:17:13.293 "name": "BaseBdev1", 00:17:13.293 "aliases": [ 00:17:13.293 "31bd3adb-6fe5-4399-9652-6985cff3e670" 00:17:13.293 ], 00:17:13.293 "product_name": "Malloc disk", 00:17:13.293 "block_size": 512, 00:17:13.293 "num_blocks": 65536, 00:17:13.293 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:13.293 "assigned_rate_limits": { 00:17:13.293 "rw_ios_per_sec": 0, 00:17:13.293 "rw_mbytes_per_sec": 0, 00:17:13.293 "r_mbytes_per_sec": 0, 00:17:13.293 "w_mbytes_per_sec": 0 00:17:13.293 }, 00:17:13.293 "claimed": true, 00:17:13.293 "claim_type": "exclusive_write", 00:17:13.293 "zoned": false, 00:17:13.293 "supported_io_types": { 00:17:13.293 "read": true, 00:17:13.293 "write": true, 00:17:13.293 "unmap": true, 00:17:13.293 "flush": true, 00:17:13.293 "reset": true, 00:17:13.293 "nvme_admin": false, 00:17:13.293 "nvme_io": false, 00:17:13.293 "nvme_io_md": false, 00:17:13.293 "write_zeroes": true, 00:17:13.293 "zcopy": true, 00:17:13.293 "get_zone_info": false, 00:17:13.293 "zone_management": false, 00:17:13.293 "zone_append": false, 00:17:13.293 "compare": false, 00:17:13.293 "compare_and_write": false, 00:17:13.293 "abort": true, 00:17:13.293 "seek_hole": false, 00:17:13.293 "seek_data": false, 00:17:13.293 "copy": true, 00:17:13.293 "nvme_iov_md": false 00:17:13.293 }, 00:17:13.293 "memory_domains": [ 00:17:13.293 { 00:17:13.293 "dma_device_id": "system", 00:17:13.293 "dma_device_type": 1 00:17:13.293 }, 00:17:13.293 { 00:17:13.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.293 "dma_device_type": 2 00:17:13.293 } 00:17:13.293 ], 00:17:13.293 "driver_specific": {} 00:17:13.293 } 00:17:13.293 ] 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.293 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.551 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.551 "name": "Existed_Raid", 00:17:13.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.551 "strip_size_kb": 64, 00:17:13.551 "state": "configuring", 00:17:13.551 "raid_level": "raid0", 00:17:13.551 "superblock": false, 00:17:13.551 "num_base_bdevs": 4, 00:17:13.551 "num_base_bdevs_discovered": 3, 00:17:13.551 "num_base_bdevs_operational": 4, 00:17:13.551 "base_bdevs_list": [ 00:17:13.551 { 00:17:13.551 "name": "BaseBdev1", 00:17:13.551 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:13.552 "is_configured": true, 00:17:13.552 "data_offset": 0, 00:17:13.552 "data_size": 65536 00:17:13.552 }, 00:17:13.552 { 00:17:13.552 "name": null, 00:17:13.552 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:13.552 "is_configured": false, 00:17:13.552 "data_offset": 0, 00:17:13.552 "data_size": 65536 00:17:13.552 }, 00:17:13.552 { 00:17:13.552 "name": "BaseBdev3", 00:17:13.552 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:13.552 "is_configured": true, 00:17:13.552 "data_offset": 0, 00:17:13.552 "data_size": 65536 00:17:13.552 }, 00:17:13.552 { 00:17:13.552 "name": "BaseBdev4", 00:17:13.552 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:13.552 "is_configured": true, 00:17:13.552 "data_offset": 0, 00:17:13.552 "data_size": 65536 00:17:13.552 } 00:17:13.552 ] 00:17:13.552 }' 00:17:13.552 11:58:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.552 11:58:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.117 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.117 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.375 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:14.375 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:14.634 [2024-07-25 11:59:00.612690] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.634 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.892 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.892 "name": "Existed_Raid", 00:17:14.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.892 "strip_size_kb": 64, 00:17:14.892 "state": "configuring", 00:17:14.892 "raid_level": "raid0", 00:17:14.892 "superblock": false, 00:17:14.892 "num_base_bdevs": 4, 00:17:14.892 "num_base_bdevs_discovered": 2, 00:17:14.892 "num_base_bdevs_operational": 4, 00:17:14.892 "base_bdevs_list": [ 00:17:14.892 { 00:17:14.892 "name": "BaseBdev1", 00:17:14.892 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:14.892 "is_configured": true, 00:17:14.892 "data_offset": 0, 00:17:14.892 "data_size": 65536 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": null, 00:17:14.892 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:14.892 "is_configured": false, 00:17:14.892 "data_offset": 0, 00:17:14.892 "data_size": 65536 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": null, 00:17:14.892 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:14.892 "is_configured": false, 00:17:14.892 "data_offset": 0, 00:17:14.892 "data_size": 65536 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": "BaseBdev4", 00:17:14.892 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:14.892 "is_configured": true, 00:17:14.892 "data_offset": 0, 00:17:14.892 "data_size": 65536 00:17:14.892 } 00:17:14.892 ] 00:17:14.892 }' 00:17:14.892 11:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.892 11:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.458 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:15.458 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:15.716 [2024-07-25 11:59:01.807870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.716 11:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.975 11:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.975 "name": "Existed_Raid", 00:17:15.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.975 "strip_size_kb": 64, 00:17:15.975 "state": "configuring", 00:17:15.975 "raid_level": "raid0", 00:17:15.975 "superblock": false, 00:17:15.975 "num_base_bdevs": 4, 00:17:15.975 "num_base_bdevs_discovered": 3, 00:17:15.975 "num_base_bdevs_operational": 4, 00:17:15.975 "base_bdevs_list": [ 00:17:15.975 { 00:17:15.975 "name": "BaseBdev1", 00:17:15.975 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:15.975 "is_configured": true, 00:17:15.975 "data_offset": 0, 00:17:15.975 "data_size": 65536 00:17:15.975 }, 00:17:15.975 { 00:17:15.975 "name": null, 00:17:15.975 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:15.975 "is_configured": false, 00:17:15.975 "data_offset": 0, 00:17:15.975 "data_size": 65536 00:17:15.975 }, 00:17:15.975 { 00:17:15.975 "name": "BaseBdev3", 00:17:15.975 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:15.975 "is_configured": true, 00:17:15.975 "data_offset": 0, 00:17:15.975 "data_size": 65536 00:17:15.975 }, 00:17:15.975 { 00:17:15.975 "name": "BaseBdev4", 00:17:15.975 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:15.975 "is_configured": true, 00:17:15.975 "data_offset": 0, 00:17:15.975 "data_size": 65536 00:17:15.975 } 00:17:15.975 ] 00:17:15.975 }' 00:17:15.975 11:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.975 11:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 11:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:16.541 11:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.799 11:59:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:16.799 11:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:17.058 [2024-07-25 11:59:02.999020] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.058 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.316 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.316 "name": "Existed_Raid", 00:17:17.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.316 "strip_size_kb": 64, 00:17:17.316 "state": "configuring", 00:17:17.316 "raid_level": "raid0", 00:17:17.316 "superblock": false, 00:17:17.316 "num_base_bdevs": 4, 00:17:17.316 "num_base_bdevs_discovered": 2, 00:17:17.316 "num_base_bdevs_operational": 4, 00:17:17.316 "base_bdevs_list": [ 00:17:17.316 { 00:17:17.316 "name": null, 00:17:17.316 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:17.316 "is_configured": false, 00:17:17.316 "data_offset": 0, 00:17:17.316 "data_size": 65536 00:17:17.316 }, 00:17:17.316 { 00:17:17.316 "name": null, 00:17:17.316 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:17.316 "is_configured": false, 00:17:17.316 "data_offset": 0, 00:17:17.316 "data_size": 65536 00:17:17.316 }, 00:17:17.316 { 00:17:17.316 "name": "BaseBdev3", 00:17:17.316 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:17.316 "is_configured": true, 00:17:17.316 "data_offset": 0, 00:17:17.316 "data_size": 65536 00:17:17.316 }, 00:17:17.316 { 00:17:17.316 "name": "BaseBdev4", 00:17:17.316 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:17.316 "is_configured": true, 00:17:17.316 "data_offset": 0, 00:17:17.316 "data_size": 65536 00:17:17.316 } 00:17:17.316 ] 00:17:17.316 }' 00:17:17.316 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.316 11:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.882 11:59:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.882 11:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:18.140 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:18.140 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:18.140 [2024-07-25 11:59:04.244467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.399 "name": "Existed_Raid", 00:17:18.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.399 "strip_size_kb": 64, 00:17:18.399 "state": "configuring", 00:17:18.399 "raid_level": "raid0", 00:17:18.399 "superblock": false, 00:17:18.399 "num_base_bdevs": 4, 00:17:18.399 "num_base_bdevs_discovered": 3, 00:17:18.399 "num_base_bdevs_operational": 4, 00:17:18.399 "base_bdevs_list": [ 00:17:18.399 { 00:17:18.399 "name": null, 00:17:18.399 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:18.399 "is_configured": false, 00:17:18.399 "data_offset": 0, 00:17:18.399 "data_size": 65536 00:17:18.399 }, 00:17:18.399 { 00:17:18.399 "name": "BaseBdev2", 00:17:18.399 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:18.399 "is_configured": true, 00:17:18.399 "data_offset": 0, 00:17:18.399 "data_size": 65536 00:17:18.399 }, 00:17:18.399 { 00:17:18.399 "name": "BaseBdev3", 00:17:18.399 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:18.399 "is_configured": true, 00:17:18.399 "data_offset": 0, 00:17:18.399 "data_size": 65536 00:17:18.399 }, 00:17:18.399 { 00:17:18.399 "name": "BaseBdev4", 00:17:18.399 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:18.399 "is_configured": true, 
00:17:18.399 "data_offset": 0, 00:17:18.399 "data_size": 65536 00:17:18.399 } 00:17:18.399 ] 00:17:18.399 }' 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.399 11:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.968 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.968 11:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:19.228 11:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:19.228 11:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.228 11:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 31bd3adb-6fe5-4399-9652-6985cff3e670 00:17:19.488 [2024-07-25 11:59:05.579062] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:19.488 [2024-07-25 11:59:05.579093] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x20d66f0 00:17:19.488 [2024-07-25 11:59:05.579101] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:19.488 [2024-07-25 11:59:05.579285] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20e23d0 00:17:19.488 [2024-07-25 11:59:05.579394] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20d66f0 00:17:19.488 [2024-07-25 11:59:05.579403] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20d66f0 00:17:19.488 [2024-07-25 11:59:05.579552] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.488 NewBaseBdev 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:19.488 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.748 11:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:20.007 [ 00:17:20.007 { 00:17:20.007 "name": "NewBaseBdev", 00:17:20.007 "aliases": [ 00:17:20.007 "31bd3adb-6fe5-4399-9652-6985cff3e670" 00:17:20.007 ], 00:17:20.007 "product_name": "Malloc disk", 
00:17:20.007 "block_size": 512, 00:17:20.007 "num_blocks": 65536, 00:17:20.007 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:20.007 "assigned_rate_limits": { 00:17:20.007 "rw_ios_per_sec": 0, 00:17:20.007 "rw_mbytes_per_sec": 0, 00:17:20.007 "r_mbytes_per_sec": 0, 00:17:20.007 "w_mbytes_per_sec": 0 00:17:20.007 }, 00:17:20.007 "claimed": true, 00:17:20.007 "claim_type": "exclusive_write", 00:17:20.007 "zoned": false, 00:17:20.007 "supported_io_types": { 00:17:20.007 "read": true, 00:17:20.007 "write": true, 00:17:20.007 "unmap": true, 00:17:20.007 "flush": true, 00:17:20.007 "reset": true, 00:17:20.007 "nvme_admin": false, 00:17:20.007 "nvme_io": false, 00:17:20.007 "nvme_io_md": false, 00:17:20.007 "write_zeroes": true, 00:17:20.007 "zcopy": true, 00:17:20.007 "get_zone_info": false, 00:17:20.007 "zone_management": false, 00:17:20.007 "zone_append": false, 00:17:20.007 "compare": false, 00:17:20.007 "compare_and_write": false, 00:17:20.007 "abort": true, 00:17:20.007 "seek_hole": false, 00:17:20.007 "seek_data": false, 00:17:20.007 "copy": true, 00:17:20.007 "nvme_iov_md": false 00:17:20.007 }, 00:17:20.007 "memory_domains": [ 00:17:20.007 { 00:17:20.007 "dma_device_id": "system", 00:17:20.007 "dma_device_type": 1 00:17:20.008 }, 00:17:20.008 { 00:17:20.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.008 "dma_device_type": 2 00:17:20.008 } 00:17:20.008 ], 00:17:20.008 "driver_specific": {} 00:17:20.008 } 00:17:20.008 ] 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.008 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.267 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.267 "name": "Existed_Raid", 00:17:20.267 "uuid": "9917bba3-08eb-4d3b-badf-5a60547d931c", 00:17:20.267 "strip_size_kb": 64, 00:17:20.267 "state": "online", 00:17:20.267 "raid_level": "raid0", 00:17:20.267 "superblock": false, 00:17:20.267 "num_base_bdevs": 4, 00:17:20.267 "num_base_bdevs_discovered": 4, 00:17:20.267 "num_base_bdevs_operational": 4, 00:17:20.267 "base_bdevs_list": [ 
00:17:20.267 { 00:17:20.267 "name": "NewBaseBdev", 00:17:20.267 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:20.267 "is_configured": true, 00:17:20.267 "data_offset": 0, 00:17:20.267 "data_size": 65536 00:17:20.267 }, 00:17:20.267 { 00:17:20.267 "name": "BaseBdev2", 00:17:20.267 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:20.267 "is_configured": true, 00:17:20.267 "data_offset": 0, 00:17:20.267 "data_size": 65536 00:17:20.267 }, 00:17:20.267 { 00:17:20.267 "name": "BaseBdev3", 00:17:20.267 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:20.267 "is_configured": true, 00:17:20.267 "data_offset": 0, 00:17:20.267 "data_size": 65536 00:17:20.267 }, 00:17:20.267 { 00:17:20.267 "name": "BaseBdev4", 00:17:20.267 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:20.267 "is_configured": true, 00:17:20.268 "data_offset": 0, 00:17:20.268 "data_size": 65536 00:17:20.268 } 00:17:20.268 ] 00:17:20.268 }' 00:17:20.268 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.268 11:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:20.869 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:21.128 [2024-07-25 11:59:06.963015] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.128 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:21.128 "name": "Existed_Raid", 00:17:21.128 "aliases": [ 00:17:21.128 "9917bba3-08eb-4d3b-badf-5a60547d931c" 00:17:21.128 ], 00:17:21.128 "product_name": "Raid Volume", 00:17:21.128 "block_size": 512, 00:17:21.128 "num_blocks": 262144, 00:17:21.128 "uuid": "9917bba3-08eb-4d3b-badf-5a60547d931c", 00:17:21.128 "assigned_rate_limits": { 00:17:21.128 "rw_ios_per_sec": 0, 00:17:21.128 "rw_mbytes_per_sec": 0, 00:17:21.128 "r_mbytes_per_sec": 0, 00:17:21.128 "w_mbytes_per_sec": 0 00:17:21.128 }, 00:17:21.128 "claimed": false, 00:17:21.128 "zoned": false, 00:17:21.128 "supported_io_types": { 00:17:21.128 "read": true, 00:17:21.128 "write": true, 00:17:21.128 "unmap": true, 00:17:21.128 "flush": true, 00:17:21.128 "reset": true, 00:17:21.128 "nvme_admin": false, 00:17:21.128 "nvme_io": false, 00:17:21.128 "nvme_io_md": false, 00:17:21.128 "write_zeroes": true, 00:17:21.128 "zcopy": false, 00:17:21.128 "get_zone_info": false, 00:17:21.128 "zone_management": false, 00:17:21.128 "zone_append": false, 00:17:21.128 "compare": false, 00:17:21.128 "compare_and_write": false, 00:17:21.128 "abort": false, 00:17:21.128 "seek_hole": false, 00:17:21.128 "seek_data": false, 00:17:21.128 "copy": false, 00:17:21.128 
"nvme_iov_md": false 00:17:21.128 }, 00:17:21.128 "memory_domains": [ 00:17:21.128 { 00:17:21.128 "dma_device_id": "system", 00:17:21.128 "dma_device_type": 1 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.128 "dma_device_type": 2 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "system", 00:17:21.128 "dma_device_type": 1 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.128 "dma_device_type": 2 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "system", 00:17:21.128 "dma_device_type": 1 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.128 "dma_device_type": 2 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "system", 00:17:21.128 "dma_device_type": 1 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.128 "dma_device_type": 2 00:17:21.128 } 00:17:21.128 ], 00:17:21.128 "driver_specific": { 00:17:21.128 "raid": { 00:17:21.128 "uuid": "9917bba3-08eb-4d3b-badf-5a60547d931c", 00:17:21.128 "strip_size_kb": 64, 00:17:21.128 "state": "online", 00:17:21.128 "raid_level": "raid0", 00:17:21.128 "superblock": false, 00:17:21.128 "num_base_bdevs": 4, 00:17:21.128 "num_base_bdevs_discovered": 4, 00:17:21.128 "num_base_bdevs_operational": 4, 00:17:21.128 "base_bdevs_list": [ 00:17:21.128 { 00:17:21.128 "name": "NewBaseBdev", 00:17:21.128 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 00:17:21.128 "is_configured": true, 00:17:21.128 "data_offset": 0, 00:17:21.128 "data_size": 65536 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "name": "BaseBdev2", 00:17:21.128 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:21.128 "is_configured": true, 00:17:21.128 "data_offset": 0, 00:17:21.128 "data_size": 65536 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "name": "BaseBdev3", 00:17:21.128 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:21.128 "is_configured": true, 00:17:21.128 "data_offset": 0, 00:17:21.128 "data_size": 65536 00:17:21.128 }, 00:17:21.128 { 00:17:21.128 "name": "BaseBdev4", 00:17:21.128 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:21.128 "is_configured": true, 00:17:21.128 "data_offset": 0, 00:17:21.128 "data_size": 65536 00:17:21.128 } 00:17:21.128 ] 00:17:21.128 } 00:17:21.128 } 00:17:21.128 }' 00:17:21.128 11:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.128 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:21.128 BaseBdev2 00:17:21.128 BaseBdev3 00:17:21.128 BaseBdev4' 00:17:21.128 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.128 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:21.128 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.387 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.387 "name": "NewBaseBdev", 00:17:21.387 "aliases": [ 00:17:21.387 "31bd3adb-6fe5-4399-9652-6985cff3e670" 00:17:21.387 ], 00:17:21.387 "product_name": "Malloc disk", 00:17:21.387 "block_size": 512, 00:17:21.387 "num_blocks": 65536, 00:17:21.387 "uuid": "31bd3adb-6fe5-4399-9652-6985cff3e670", 
00:17:21.387 "assigned_rate_limits": { 00:17:21.387 "rw_ios_per_sec": 0, 00:17:21.387 "rw_mbytes_per_sec": 0, 00:17:21.387 "r_mbytes_per_sec": 0, 00:17:21.387 "w_mbytes_per_sec": 0 00:17:21.387 }, 00:17:21.387 "claimed": true, 00:17:21.387 "claim_type": "exclusive_write", 00:17:21.387 "zoned": false, 00:17:21.387 "supported_io_types": { 00:17:21.387 "read": true, 00:17:21.387 "write": true, 00:17:21.387 "unmap": true, 00:17:21.387 "flush": true, 00:17:21.387 "reset": true, 00:17:21.387 "nvme_admin": false, 00:17:21.387 "nvme_io": false, 00:17:21.387 "nvme_io_md": false, 00:17:21.387 "write_zeroes": true, 00:17:21.387 "zcopy": true, 00:17:21.387 "get_zone_info": false, 00:17:21.387 "zone_management": false, 00:17:21.387 "zone_append": false, 00:17:21.387 "compare": false, 00:17:21.387 "compare_and_write": false, 00:17:21.387 "abort": true, 00:17:21.388 "seek_hole": false, 00:17:21.388 "seek_data": false, 00:17:21.388 "copy": true, 00:17:21.388 "nvme_iov_md": false 00:17:21.388 }, 00:17:21.388 "memory_domains": [ 00:17:21.388 { 00:17:21.388 "dma_device_id": "system", 00:17:21.388 "dma_device_type": 1 00:17:21.388 }, 00:17:21.388 { 00:17:21.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.388 "dma_device_type": 2 00:17:21.388 } 00:17:21.388 ], 00:17:21.388 "driver_specific": {} 00:17:21.388 }' 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.388 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.646 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.646 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.646 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.646 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:21.646 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.904 "name": "BaseBdev2", 00:17:21.904 "aliases": [ 00:17:21.904 "940f4bb6-7314-4030-a66a-9355795825bd" 00:17:21.904 ], 00:17:21.904 "product_name": "Malloc disk", 00:17:21.904 "block_size": 512, 00:17:21.904 "num_blocks": 65536, 00:17:21.904 "uuid": "940f4bb6-7314-4030-a66a-9355795825bd", 00:17:21.904 "assigned_rate_limits": { 00:17:21.904 "rw_ios_per_sec": 0, 00:17:21.904 "rw_mbytes_per_sec": 0, 00:17:21.904 "r_mbytes_per_sec": 0, 00:17:21.904 "w_mbytes_per_sec": 0 
00:17:21.904 }, 00:17:21.904 "claimed": true, 00:17:21.904 "claim_type": "exclusive_write", 00:17:21.904 "zoned": false, 00:17:21.904 "supported_io_types": { 00:17:21.904 "read": true, 00:17:21.904 "write": true, 00:17:21.904 "unmap": true, 00:17:21.904 "flush": true, 00:17:21.904 "reset": true, 00:17:21.904 "nvme_admin": false, 00:17:21.904 "nvme_io": false, 00:17:21.904 "nvme_io_md": false, 00:17:21.904 "write_zeroes": true, 00:17:21.904 "zcopy": true, 00:17:21.904 "get_zone_info": false, 00:17:21.904 "zone_management": false, 00:17:21.904 "zone_append": false, 00:17:21.904 "compare": false, 00:17:21.904 "compare_and_write": false, 00:17:21.904 "abort": true, 00:17:21.904 "seek_hole": false, 00:17:21.904 "seek_data": false, 00:17:21.904 "copy": true, 00:17:21.904 "nvme_iov_md": false 00:17:21.904 }, 00:17:21.904 "memory_domains": [ 00:17:21.904 { 00:17:21.904 "dma_device_id": "system", 00:17:21.904 "dma_device_type": 1 00:17:21.904 }, 00:17:21.904 { 00:17:21.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.904 "dma_device_type": 2 00:17:21.904 } 00:17:21.904 ], 00:17:21.904 "driver_specific": {} 00:17:21.904 }' 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.904 11:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.904 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.163 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.164 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.423 "name": "BaseBdev3", 00:17:22.423 "aliases": [ 00:17:22.423 "9f966478-4e35-4cf2-95ca-f90e112fe21f" 00:17:22.423 ], 00:17:22.423 "product_name": "Malloc disk", 00:17:22.423 "block_size": 512, 00:17:22.423 "num_blocks": 65536, 00:17:22.423 "uuid": "9f966478-4e35-4cf2-95ca-f90e112fe21f", 00:17:22.423 "assigned_rate_limits": { 00:17:22.423 "rw_ios_per_sec": 0, 00:17:22.423 "rw_mbytes_per_sec": 0, 00:17:22.423 "r_mbytes_per_sec": 0, 00:17:22.423 "w_mbytes_per_sec": 0 00:17:22.423 }, 00:17:22.423 "claimed": true, 00:17:22.423 "claim_type": "exclusive_write", 00:17:22.423 "zoned": false, 00:17:22.423 "supported_io_types": { 00:17:22.423 "read": 
true, 00:17:22.423 "write": true, 00:17:22.423 "unmap": true, 00:17:22.423 "flush": true, 00:17:22.423 "reset": true, 00:17:22.423 "nvme_admin": false, 00:17:22.423 "nvme_io": false, 00:17:22.423 "nvme_io_md": false, 00:17:22.423 "write_zeroes": true, 00:17:22.423 "zcopy": true, 00:17:22.423 "get_zone_info": false, 00:17:22.423 "zone_management": false, 00:17:22.423 "zone_append": false, 00:17:22.423 "compare": false, 00:17:22.423 "compare_and_write": false, 00:17:22.423 "abort": true, 00:17:22.423 "seek_hole": false, 00:17:22.423 "seek_data": false, 00:17:22.423 "copy": true, 00:17:22.423 "nvme_iov_md": false 00:17:22.423 }, 00:17:22.423 "memory_domains": [ 00:17:22.423 { 00:17:22.423 "dma_device_id": "system", 00:17:22.423 "dma_device_type": 1 00:17:22.423 }, 00:17:22.423 { 00:17:22.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.423 "dma_device_type": 2 00:17:22.423 } 00:17:22.423 ], 00:17:22.423 "driver_specific": {} 00:17:22.423 }' 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.423 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:22.689 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.950 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.950 "name": "BaseBdev4", 00:17:22.950 "aliases": [ 00:17:22.950 "e83d80b7-d84e-4337-9ecd-511ae5120758" 00:17:22.950 ], 00:17:22.950 "product_name": "Malloc disk", 00:17:22.950 "block_size": 512, 00:17:22.950 "num_blocks": 65536, 00:17:22.950 "uuid": "e83d80b7-d84e-4337-9ecd-511ae5120758", 00:17:22.950 "assigned_rate_limits": { 00:17:22.950 "rw_ios_per_sec": 0, 00:17:22.950 "rw_mbytes_per_sec": 0, 00:17:22.950 "r_mbytes_per_sec": 0, 00:17:22.950 "w_mbytes_per_sec": 0 00:17:22.950 }, 00:17:22.950 "claimed": true, 00:17:22.950 "claim_type": "exclusive_write", 00:17:22.950 "zoned": false, 00:17:22.950 "supported_io_types": { 00:17:22.950 "read": true, 00:17:22.950 "write": true, 00:17:22.950 "unmap": true, 00:17:22.950 "flush": true, 00:17:22.950 "reset": true, 00:17:22.950 "nvme_admin": false, 00:17:22.950 "nvme_io": 
false, 00:17:22.950 "nvme_io_md": false, 00:17:22.950 "write_zeroes": true, 00:17:22.950 "zcopy": true, 00:17:22.950 "get_zone_info": false, 00:17:22.950 "zone_management": false, 00:17:22.950 "zone_append": false, 00:17:22.950 "compare": false, 00:17:22.950 "compare_and_write": false, 00:17:22.950 "abort": true, 00:17:22.950 "seek_hole": false, 00:17:22.950 "seek_data": false, 00:17:22.950 "copy": true, 00:17:22.950 "nvme_iov_md": false 00:17:22.950 }, 00:17:22.950 "memory_domains": [ 00:17:22.950 { 00:17:22.950 "dma_device_id": "system", 00:17:22.950 "dma_device_type": 1 00:17:22.950 }, 00:17:22.950 { 00:17:22.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.950 "dma_device_type": 2 00:17:22.950 } 00:17:22.950 ], 00:17:22.950 "driver_specific": {} 00:17:22.950 }' 00:17:22.950 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.950 11:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.950 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.950 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.950 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.208 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.208 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.208 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.208 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.208 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.209 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.209 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.209 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:23.467 [2024-07-25 11:59:09.497427] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.467 [2024-07-25 11:59:09.497450] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.467 [2024-07-25 11:59:09.497497] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.467 [2024-07-25 11:59:09.497551] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.467 [2024-07-25 11:59:09.497562] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20d66f0 name Existed_Raid, state offline 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4165535 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4165535 ']' 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4165535 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 4165535 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4165535' 00:17:23.467 killing process with pid 4165535 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4165535 00:17:23.467 [2024-07-25 11:59:09.575935] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.467 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4165535 00:17:23.726 [2024-07-25 11:59:09.608546] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.726 11:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:23.726 00:17:23.726 real 0m30.162s 00:17:23.726 user 0m55.218s 00:17:23.726 sys 0m5.475s 00:17:23.726 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.726 11:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.726 ************************************ 00:17:23.726 END TEST raid_state_function_test 00:17:23.726 ************************************ 00:17:23.726 11:59:09 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:23.726 11:59:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:23.726 11:59:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.726 11:59:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.985 ************************************ 00:17:23.985 START TEST raid_state_function_test_sb 00:17:23.985 ************************************ 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:23.985 11:59:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4171236 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4171236' 00:17:23.985 Process raid pid: 4171236 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4171236 /var/tmp/spdk-raid.sock 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4171236 ']' 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:23.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.985 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:23.985 [2024-07-25 11:59:09.938215] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:17:23.985 [2024-07-25 11:59:09.938271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.985 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:23.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.985 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:23.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.985 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:23.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:23.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:23.986 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:23.986 [2024-07-25 11:59:10.071092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.244 [2024-07-25 11:59:10.157110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.244 [2024-07-25 11:59:10.222184] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.244 [2024-07-25 11:59:10.222218] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.811 11:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.812 11:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:24.812 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:25.379 [2024-07-25 11:59:11.260946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.379 [2024-07-25 11:59:11.260981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.379 [2024-07-25 11:59:11.260991] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.379 [2024-07-25 11:59:11.261003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.379 [2024-07-25 11:59:11.261011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.379 [2024-07-25 11:59:11.261020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.379 [2024-07-25 11:59:11.261028] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:25.379 [2024-07-25 11:59:11.261038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.379 11:59:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.379 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.638 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.638 "name": "Existed_Raid", 00:17:25.638 "uuid": "7be327c8-4b99-4ee5-87cf-46899932e71a", 00:17:25.638 "strip_size_kb": 64, 00:17:25.638 "state": "configuring", 00:17:25.638 "raid_level": "raid0", 00:17:25.638 "superblock": true, 00:17:25.638 "num_base_bdevs": 4, 00:17:25.638 "num_base_bdevs_discovered": 0, 00:17:25.638 "num_base_bdevs_operational": 4, 00:17:25.638 "base_bdevs_list": [ 00:17:25.638 { 00:17:25.638 "name": "BaseBdev1", 00:17:25.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.638 "is_configured": false, 00:17:25.638 "data_offset": 0, 00:17:25.638 "data_size": 0 00:17:25.638 }, 00:17:25.638 { 00:17:25.638 "name": "BaseBdev2", 00:17:25.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.638 "is_configured": false, 00:17:25.638 "data_offset": 0, 00:17:25.638 "data_size": 0 00:17:25.638 }, 00:17:25.638 { 00:17:25.638 "name": "BaseBdev3", 00:17:25.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.638 "is_configured": false, 00:17:25.638 "data_offset": 0, 00:17:25.638 "data_size": 0 00:17:25.638 }, 00:17:25.638 { 00:17:25.638 "name": "BaseBdev4", 00:17:25.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.638 "is_configured": false, 00:17:25.638 "data_offset": 0, 00:17:25.638 "data_size": 0 00:17:25.638 } 00:17:25.638 ] 00:17:25.638 }' 00:17:25.638 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.638 11:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.206 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.465 [2024-07-25 11:59:12.568227] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.465 [2024-07-25 11:59:12.568256] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1248f60 name Existed_Raid, state configuring 00:17:26.725 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r 
raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.725 [2024-07-25 11:59:12.808878] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.725 [2024-07-25 11:59:12.808907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.725 [2024-07-25 11:59:12.808917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.725 [2024-07-25 11:59:12.808928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.725 [2024-07-25 11:59:12.808938] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.725 [2024-07-25 11:59:12.808949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.725 [2024-07-25 11:59:12.808958] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.725 [2024-07-25 11:59:12.808968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.725 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.293 [2024-07-25 11:59:13.315588] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.294 BaseBdev1 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.294 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.552 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.811 [ 00:17:27.811 { 00:17:27.811 "name": "BaseBdev1", 00:17:27.811 "aliases": [ 00:17:27.811 "716578a8-d2e9-4262-bfa9-84be54100e53" 00:17:27.811 ], 00:17:27.811 "product_name": "Malloc disk", 00:17:27.811 "block_size": 512, 00:17:27.811 "num_blocks": 65536, 00:17:27.811 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:27.811 "assigned_rate_limits": { 00:17:27.811 "rw_ios_per_sec": 0, 00:17:27.811 "rw_mbytes_per_sec": 0, 00:17:27.811 "r_mbytes_per_sec": 0, 00:17:27.811 "w_mbytes_per_sec": 0 00:17:27.811 }, 00:17:27.811 "claimed": true, 00:17:27.811 "claim_type": "exclusive_write", 00:17:27.811 "zoned": false, 00:17:27.811 "supported_io_types": { 00:17:27.811 "read": true, 00:17:27.811 "write": true, 00:17:27.811 "unmap": true, 00:17:27.811 "flush": true, 00:17:27.811 "reset": true, 00:17:27.811 "nvme_admin": false, 00:17:27.811 "nvme_io": false, 00:17:27.811 "nvme_io_md": false, 00:17:27.811 "write_zeroes": true, 
00:17:27.811 "zcopy": true, 00:17:27.811 "get_zone_info": false, 00:17:27.811 "zone_management": false, 00:17:27.811 "zone_append": false, 00:17:27.811 "compare": false, 00:17:27.811 "compare_and_write": false, 00:17:27.811 "abort": true, 00:17:27.811 "seek_hole": false, 00:17:27.811 "seek_data": false, 00:17:27.811 "copy": true, 00:17:27.811 "nvme_iov_md": false 00:17:27.811 }, 00:17:27.811 "memory_domains": [ 00:17:27.811 { 00:17:27.811 "dma_device_id": "system", 00:17:27.811 "dma_device_type": 1 00:17:27.811 }, 00:17:27.811 { 00:17:27.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.811 "dma_device_type": 2 00:17:27.811 } 00:17:27.811 ], 00:17:27.811 "driver_specific": {} 00:17:27.811 } 00:17:27.811 ] 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.811 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.070 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.070 "name": "Existed_Raid", 00:17:28.070 "uuid": "05d8f008-7df5-4970-baee-0988c797efc4", 00:17:28.070 "strip_size_kb": 64, 00:17:28.070 "state": "configuring", 00:17:28.070 "raid_level": "raid0", 00:17:28.070 "superblock": true, 00:17:28.070 "num_base_bdevs": 4, 00:17:28.070 "num_base_bdevs_discovered": 1, 00:17:28.070 "num_base_bdevs_operational": 4, 00:17:28.070 "base_bdevs_list": [ 00:17:28.070 { 00:17:28.070 "name": "BaseBdev1", 00:17:28.070 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:28.070 "is_configured": true, 00:17:28.070 "data_offset": 2048, 00:17:28.070 "data_size": 63488 00:17:28.070 }, 00:17:28.070 { 00:17:28.070 "name": "BaseBdev2", 00:17:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.070 "is_configured": false, 00:17:28.070 "data_offset": 0, 00:17:28.070 "data_size": 0 00:17:28.070 }, 00:17:28.070 { 00:17:28.070 "name": "BaseBdev3", 00:17:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.070 "is_configured": false, 00:17:28.070 "data_offset": 0, 00:17:28.070 "data_size": 0 00:17:28.070 }, 00:17:28.070 { 
00:17:28.070 "name": "BaseBdev4", 00:17:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.070 "is_configured": false, 00:17:28.070 "data_offset": 0, 00:17:28.070 "data_size": 0 00:17:28.070 } 00:17:28.070 ] 00:17:28.070 }' 00:17:28.070 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.070 11:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:28.897 [2024-07-25 11:59:14.759380] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.898 [2024-07-25 11:59:14.759414] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12487d0 name Existed_Raid, state configuring 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:28.898 [2024-07-25 11:59:14.971977] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.898 [2024-07-25 11:59:14.973373] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.898 [2024-07-25 11:59:14.973403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.898 [2024-07-25 11:59:14.973412] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.898 [2024-07-25 11:59:14.973423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.898 [2024-07-25 11:59:14.973431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:28.898 [2024-07-25 11:59:14.973445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.898 11:59:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.898 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.157 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.157 "name": "Existed_Raid", 00:17:29.157 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:29.157 "strip_size_kb": 64, 00:17:29.157 "state": "configuring", 00:17:29.157 "raid_level": "raid0", 00:17:29.157 "superblock": true, 00:17:29.157 "num_base_bdevs": 4, 00:17:29.157 "num_base_bdevs_discovered": 1, 00:17:29.157 "num_base_bdevs_operational": 4, 00:17:29.157 "base_bdevs_list": [ 00:17:29.157 { 00:17:29.157 "name": "BaseBdev1", 00:17:29.157 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:29.157 "is_configured": true, 00:17:29.157 "data_offset": 2048, 00:17:29.157 "data_size": 63488 00:17:29.157 }, 00:17:29.157 { 00:17:29.157 "name": "BaseBdev2", 00:17:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.157 "is_configured": false, 00:17:29.157 "data_offset": 0, 00:17:29.157 "data_size": 0 00:17:29.157 }, 00:17:29.157 { 00:17:29.157 "name": "BaseBdev3", 00:17:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.157 "is_configured": false, 00:17:29.157 "data_offset": 0, 00:17:29.157 "data_size": 0 00:17:29.157 }, 00:17:29.157 { 00:17:29.157 "name": "BaseBdev4", 00:17:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.157 "is_configured": false, 00:17:29.157 "data_offset": 0, 00:17:29.157 "data_size": 0 00:17:29.157 } 00:17:29.157 ] 00:17:29.157 }' 00:17:29.157 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.157 11:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.723 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.982 [2024-07-25 11:59:16.005728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.982 BaseBdev2 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.982 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.240 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.498 [ 00:17:30.498 { 00:17:30.498 "name": "BaseBdev2", 00:17:30.498 "aliases": [ 00:17:30.498 
"3e70771d-278e-4d58-b36f-b75ba9345ff5" 00:17:30.498 ], 00:17:30.498 "product_name": "Malloc disk", 00:17:30.498 "block_size": 512, 00:17:30.498 "num_blocks": 65536, 00:17:30.498 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:30.498 "assigned_rate_limits": { 00:17:30.498 "rw_ios_per_sec": 0, 00:17:30.498 "rw_mbytes_per_sec": 0, 00:17:30.498 "r_mbytes_per_sec": 0, 00:17:30.498 "w_mbytes_per_sec": 0 00:17:30.498 }, 00:17:30.498 "claimed": true, 00:17:30.498 "claim_type": "exclusive_write", 00:17:30.498 "zoned": false, 00:17:30.498 "supported_io_types": { 00:17:30.498 "read": true, 00:17:30.498 "write": true, 00:17:30.498 "unmap": true, 00:17:30.498 "flush": true, 00:17:30.498 "reset": true, 00:17:30.498 "nvme_admin": false, 00:17:30.498 "nvme_io": false, 00:17:30.498 "nvme_io_md": false, 00:17:30.498 "write_zeroes": true, 00:17:30.498 "zcopy": true, 00:17:30.498 "get_zone_info": false, 00:17:30.498 "zone_management": false, 00:17:30.498 "zone_append": false, 00:17:30.498 "compare": false, 00:17:30.498 "compare_and_write": false, 00:17:30.498 "abort": true, 00:17:30.498 "seek_hole": false, 00:17:30.498 "seek_data": false, 00:17:30.498 "copy": true, 00:17:30.498 "nvme_iov_md": false 00:17:30.498 }, 00:17:30.498 "memory_domains": [ 00:17:30.498 { 00:17:30.498 "dma_device_id": "system", 00:17:30.498 "dma_device_type": 1 00:17:30.498 }, 00:17:30.498 { 00:17:30.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.498 "dma_device_type": 2 00:17:30.498 } 00:17:30.498 ], 00:17:30.498 "driver_specific": {} 00:17:30.498 } 00:17:30.498 ] 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.498 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.758 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.758 "name": "Existed_Raid", 
00:17:30.758 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:30.758 "strip_size_kb": 64, 00:17:30.758 "state": "configuring", 00:17:30.758 "raid_level": "raid0", 00:17:30.758 "superblock": true, 00:17:30.758 "num_base_bdevs": 4, 00:17:30.758 "num_base_bdevs_discovered": 2, 00:17:30.758 "num_base_bdevs_operational": 4, 00:17:30.758 "base_bdevs_list": [ 00:17:30.758 { 00:17:30.758 "name": "BaseBdev1", 00:17:30.758 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:30.758 "is_configured": true, 00:17:30.758 "data_offset": 2048, 00:17:30.758 "data_size": 63488 00:17:30.758 }, 00:17:30.758 { 00:17:30.758 "name": "BaseBdev2", 00:17:30.758 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:30.758 "is_configured": true, 00:17:30.758 "data_offset": 2048, 00:17:30.758 "data_size": 63488 00:17:30.758 }, 00:17:30.758 { 00:17:30.758 "name": "BaseBdev3", 00:17:30.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.758 "is_configured": false, 00:17:30.758 "data_offset": 0, 00:17:30.758 "data_size": 0 00:17:30.758 }, 00:17:30.758 { 00:17:30.758 "name": "BaseBdev4", 00:17:30.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.758 "is_configured": false, 00:17:30.758 "data_offset": 0, 00:17:30.758 "data_size": 0 00:17:30.758 } 00:17:30.758 ] 00:17:30.758 }' 00:17:30.758 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.758 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.325 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.584 [2024-07-25 11:59:17.456836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.584 BaseBdev3 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.584 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:32.152 [ 00:17:32.152 { 00:17:32.152 "name": "BaseBdev3", 00:17:32.152 "aliases": [ 00:17:32.152 "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a" 00:17:32.152 ], 00:17:32.152 "product_name": "Malloc disk", 00:17:32.152 "block_size": 512, 00:17:32.152 "num_blocks": 65536, 00:17:32.152 "uuid": "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:32.152 "assigned_rate_limits": { 00:17:32.152 "rw_ios_per_sec": 0, 00:17:32.152 "rw_mbytes_per_sec": 0, 00:17:32.152 "r_mbytes_per_sec": 0, 00:17:32.152 "w_mbytes_per_sec": 0 00:17:32.152 }, 00:17:32.152 "claimed": true, 00:17:32.152 
"claim_type": "exclusive_write", 00:17:32.152 "zoned": false, 00:17:32.152 "supported_io_types": { 00:17:32.152 "read": true, 00:17:32.152 "write": true, 00:17:32.152 "unmap": true, 00:17:32.152 "flush": true, 00:17:32.152 "reset": true, 00:17:32.152 "nvme_admin": false, 00:17:32.152 "nvme_io": false, 00:17:32.152 "nvme_io_md": false, 00:17:32.152 "write_zeroes": true, 00:17:32.152 "zcopy": true, 00:17:32.152 "get_zone_info": false, 00:17:32.152 "zone_management": false, 00:17:32.152 "zone_append": false, 00:17:32.152 "compare": false, 00:17:32.152 "compare_and_write": false, 00:17:32.152 "abort": true, 00:17:32.152 "seek_hole": false, 00:17:32.152 "seek_data": false, 00:17:32.152 "copy": true, 00:17:32.152 "nvme_iov_md": false 00:17:32.152 }, 00:17:32.152 "memory_domains": [ 00:17:32.152 { 00:17:32.152 "dma_device_id": "system", 00:17:32.152 "dma_device_type": 1 00:17:32.152 }, 00:17:32.152 { 00:17:32.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.152 "dma_device_type": 2 00:17:32.152 } 00:17:32.152 ], 00:17:32.152 "driver_specific": {} 00:17:32.152 } 00:17:32.152 ] 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.152 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.411 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.411 "name": "Existed_Raid", 00:17:32.411 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:32.411 "strip_size_kb": 64, 00:17:32.411 "state": "configuring", 00:17:32.411 "raid_level": "raid0", 00:17:32.411 "superblock": true, 00:17:32.411 "num_base_bdevs": 4, 00:17:32.411 "num_base_bdevs_discovered": 3, 00:17:32.411 "num_base_bdevs_operational": 4, 00:17:32.411 "base_bdevs_list": [ 00:17:32.411 { 00:17:32.411 "name": "BaseBdev1", 00:17:32.411 "uuid": 
"716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:32.411 "is_configured": true, 00:17:32.411 "data_offset": 2048, 00:17:32.411 "data_size": 63488 00:17:32.411 }, 00:17:32.411 { 00:17:32.411 "name": "BaseBdev2", 00:17:32.411 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:32.411 "is_configured": true, 00:17:32.411 "data_offset": 2048, 00:17:32.411 "data_size": 63488 00:17:32.411 }, 00:17:32.411 { 00:17:32.411 "name": "BaseBdev3", 00:17:32.411 "uuid": "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:32.411 "is_configured": true, 00:17:32.411 "data_offset": 2048, 00:17:32.411 "data_size": 63488 00:17:32.411 }, 00:17:32.411 { 00:17:32.411 "name": "BaseBdev4", 00:17:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.411 "is_configured": false, 00:17:32.411 "data_offset": 0, 00:17:32.411 "data_size": 0 00:17:32.411 } 00:17:32.411 ] 00:17:32.411 }' 00:17:32.411 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.411 11:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.976 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:33.235 [2024-07-25 11:59:19.168582] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:33.235 [2024-07-25 11:59:19.168731] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1249830 00:17:33.235 [2024-07-25 11:59:19.168743] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:33.235 [2024-07-25 11:59:19.168907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12401e0 00:17:33.235 [2024-07-25 11:59:19.169018] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1249830 00:17:33.235 [2024-07-25 11:59:19.169028] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1249830 00:17:33.235 [2024-07-25 11:59:19.169113] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.235 BaseBdev4 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:33.235 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:33.843 [ 00:17:33.843 { 00:17:33.843 "name": "BaseBdev4", 00:17:33.843 "aliases": [ 00:17:33.843 "9eb0b102-e143-46de-91c0-49594c8e3259" 00:17:33.843 ], 00:17:33.843 "product_name": "Malloc disk", 00:17:33.843 "block_size": 512, 
00:17:33.843 "num_blocks": 65536, 00:17:33.843 "uuid": "9eb0b102-e143-46de-91c0-49594c8e3259", 00:17:33.843 "assigned_rate_limits": { 00:17:33.843 "rw_ios_per_sec": 0, 00:17:33.843 "rw_mbytes_per_sec": 0, 00:17:33.843 "r_mbytes_per_sec": 0, 00:17:33.843 "w_mbytes_per_sec": 0 00:17:33.843 }, 00:17:33.843 "claimed": true, 00:17:33.843 "claim_type": "exclusive_write", 00:17:33.843 "zoned": false, 00:17:33.843 "supported_io_types": { 00:17:33.843 "read": true, 00:17:33.843 "write": true, 00:17:33.843 "unmap": true, 00:17:33.843 "flush": true, 00:17:33.843 "reset": true, 00:17:33.843 "nvme_admin": false, 00:17:33.843 "nvme_io": false, 00:17:33.843 "nvme_io_md": false, 00:17:33.843 "write_zeroes": true, 00:17:33.843 "zcopy": true, 00:17:33.843 "get_zone_info": false, 00:17:33.843 "zone_management": false, 00:17:33.843 "zone_append": false, 00:17:33.843 "compare": false, 00:17:33.843 "compare_and_write": false, 00:17:33.843 "abort": true, 00:17:33.843 "seek_hole": false, 00:17:33.843 "seek_data": false, 00:17:33.843 "copy": true, 00:17:33.843 "nvme_iov_md": false 00:17:33.843 }, 00:17:33.843 "memory_domains": [ 00:17:33.843 { 00:17:33.843 "dma_device_id": "system", 00:17:33.843 "dma_device_type": 1 00:17:33.843 }, 00:17:33.843 { 00:17:33.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.843 "dma_device_type": 2 00:17:33.843 } 00:17:33.843 ], 00:17:33.843 "driver_specific": {} 00:17:33.843 } 00:17:33.843 ] 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.843 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.103 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.103 "name": "Existed_Raid", 00:17:34.103 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:34.103 "strip_size_kb": 64, 00:17:34.103 "state": "online", 00:17:34.103 
"raid_level": "raid0", 00:17:34.103 "superblock": true, 00:17:34.103 "num_base_bdevs": 4, 00:17:34.103 "num_base_bdevs_discovered": 4, 00:17:34.103 "num_base_bdevs_operational": 4, 00:17:34.103 "base_bdevs_list": [ 00:17:34.103 { 00:17:34.103 "name": "BaseBdev1", 00:17:34.103 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:34.103 "is_configured": true, 00:17:34.103 "data_offset": 2048, 00:17:34.103 "data_size": 63488 00:17:34.103 }, 00:17:34.103 { 00:17:34.103 "name": "BaseBdev2", 00:17:34.103 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:34.103 "is_configured": true, 00:17:34.103 "data_offset": 2048, 00:17:34.103 "data_size": 63488 00:17:34.103 }, 00:17:34.103 { 00:17:34.103 "name": "BaseBdev3", 00:17:34.103 "uuid": "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:34.103 "is_configured": true, 00:17:34.103 "data_offset": 2048, 00:17:34.103 "data_size": 63488 00:17:34.103 }, 00:17:34.103 { 00:17:34.103 "name": "BaseBdev4", 00:17:34.103 "uuid": "9eb0b102-e143-46de-91c0-49594c8e3259", 00:17:34.103 "is_configured": true, 00:17:34.103 "data_offset": 2048, 00:17:34.103 "data_size": 63488 00:17:34.103 } 00:17:34.103 ] 00:17:34.103 }' 00:17:34.103 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.103 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:34.670 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:34.927 [2024-07-25 11:59:20.945601] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.928 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:34.928 "name": "Existed_Raid", 00:17:34.928 "aliases": [ 00:17:34.928 "26809cd3-0e7f-4413-9f1e-d6cfbed4b537" 00:17:34.928 ], 00:17:34.928 "product_name": "Raid Volume", 00:17:34.928 "block_size": 512, 00:17:34.928 "num_blocks": 253952, 00:17:34.928 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:34.928 "assigned_rate_limits": { 00:17:34.928 "rw_ios_per_sec": 0, 00:17:34.928 "rw_mbytes_per_sec": 0, 00:17:34.928 "r_mbytes_per_sec": 0, 00:17:34.928 "w_mbytes_per_sec": 0 00:17:34.928 }, 00:17:34.928 "claimed": false, 00:17:34.928 "zoned": false, 00:17:34.928 "supported_io_types": { 00:17:34.928 "read": true, 00:17:34.928 "write": true, 00:17:34.928 "unmap": true, 00:17:34.928 "flush": true, 00:17:34.928 "reset": true, 00:17:34.928 "nvme_admin": false, 00:17:34.928 "nvme_io": false, 00:17:34.928 "nvme_io_md": false, 00:17:34.928 "write_zeroes": true, 00:17:34.928 "zcopy": false, 00:17:34.928 "get_zone_info": false, 00:17:34.928 
"zone_management": false, 00:17:34.928 "zone_append": false, 00:17:34.928 "compare": false, 00:17:34.928 "compare_and_write": false, 00:17:34.928 "abort": false, 00:17:34.928 "seek_hole": false, 00:17:34.928 "seek_data": false, 00:17:34.928 "copy": false, 00:17:34.928 "nvme_iov_md": false 00:17:34.928 }, 00:17:34.928 "memory_domains": [ 00:17:34.928 { 00:17:34.928 "dma_device_id": "system", 00:17:34.928 "dma_device_type": 1 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.928 "dma_device_type": 2 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "system", 00:17:34.928 "dma_device_type": 1 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.928 "dma_device_type": 2 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "system", 00:17:34.928 "dma_device_type": 1 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.928 "dma_device_type": 2 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "system", 00:17:34.928 "dma_device_type": 1 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.928 "dma_device_type": 2 00:17:34.928 } 00:17:34.928 ], 00:17:34.928 "driver_specific": { 00:17:34.928 "raid": { 00:17:34.928 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:34.928 "strip_size_kb": 64, 00:17:34.928 "state": "online", 00:17:34.928 "raid_level": "raid0", 00:17:34.928 "superblock": true, 00:17:34.928 "num_base_bdevs": 4, 00:17:34.928 "num_base_bdevs_discovered": 4, 00:17:34.928 "num_base_bdevs_operational": 4, 00:17:34.928 "base_bdevs_list": [ 00:17:34.928 { 00:17:34.928 "name": "BaseBdev1", 00:17:34.928 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev2", 00:17:34.928 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev3", 00:17:34.928 "uuid": "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 }, 00:17:34.928 { 00:17:34.928 "name": "BaseBdev4", 00:17:34.928 "uuid": "9eb0b102-e143-46de-91c0-49594c8e3259", 00:17:34.928 "is_configured": true, 00:17:34.928 "data_offset": 2048, 00:17:34.928 "data_size": 63488 00:17:34.928 } 00:17:34.928 ] 00:17:34.928 } 00:17:34.928 } 00:17:34.928 }' 00:17:34.928 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.928 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:34.928 BaseBdev2 00:17:34.928 BaseBdev3 00:17:34.928 BaseBdev4' 00:17:34.928 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:34.928 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:34.928 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.186 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.186 
"name": "BaseBdev1", 00:17:35.186 "aliases": [ 00:17:35.186 "716578a8-d2e9-4262-bfa9-84be54100e53" 00:17:35.186 ], 00:17:35.186 "product_name": "Malloc disk", 00:17:35.186 "block_size": 512, 00:17:35.186 "num_blocks": 65536, 00:17:35.186 "uuid": "716578a8-d2e9-4262-bfa9-84be54100e53", 00:17:35.186 "assigned_rate_limits": { 00:17:35.186 "rw_ios_per_sec": 0, 00:17:35.186 "rw_mbytes_per_sec": 0, 00:17:35.186 "r_mbytes_per_sec": 0, 00:17:35.186 "w_mbytes_per_sec": 0 00:17:35.186 }, 00:17:35.186 "claimed": true, 00:17:35.186 "claim_type": "exclusive_write", 00:17:35.186 "zoned": false, 00:17:35.186 "supported_io_types": { 00:17:35.186 "read": true, 00:17:35.186 "write": true, 00:17:35.186 "unmap": true, 00:17:35.186 "flush": true, 00:17:35.186 "reset": true, 00:17:35.186 "nvme_admin": false, 00:17:35.186 "nvme_io": false, 00:17:35.186 "nvme_io_md": false, 00:17:35.186 "write_zeroes": true, 00:17:35.186 "zcopy": true, 00:17:35.186 "get_zone_info": false, 00:17:35.186 "zone_management": false, 00:17:35.186 "zone_append": false, 00:17:35.186 "compare": false, 00:17:35.186 "compare_and_write": false, 00:17:35.186 "abort": true, 00:17:35.186 "seek_hole": false, 00:17:35.186 "seek_data": false, 00:17:35.186 "copy": true, 00:17:35.186 "nvme_iov_md": false 00:17:35.186 }, 00:17:35.186 "memory_domains": [ 00:17:35.186 { 00:17:35.186 "dma_device_id": "system", 00:17:35.186 "dma_device_type": 1 00:17:35.186 }, 00:17:35.186 { 00:17:35.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.186 "dma_device_type": 2 00:17:35.186 } 00:17:35.186 ], 00:17:35.186 "driver_specific": {} 00:17:35.186 }' 00:17:35.186 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.186 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.444 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.702 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.702 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.702 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:35.702 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.702 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.702 "name": "BaseBdev2", 00:17:35.702 "aliases": [ 00:17:35.702 "3e70771d-278e-4d58-b36f-b75ba9345ff5" 00:17:35.702 ], 00:17:35.702 
"product_name": "Malloc disk", 00:17:35.702 "block_size": 512, 00:17:35.702 "num_blocks": 65536, 00:17:35.702 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:35.702 "assigned_rate_limits": { 00:17:35.702 "rw_ios_per_sec": 0, 00:17:35.702 "rw_mbytes_per_sec": 0, 00:17:35.702 "r_mbytes_per_sec": 0, 00:17:35.702 "w_mbytes_per_sec": 0 00:17:35.702 }, 00:17:35.702 "claimed": true, 00:17:35.702 "claim_type": "exclusive_write", 00:17:35.702 "zoned": false, 00:17:35.702 "supported_io_types": { 00:17:35.702 "read": true, 00:17:35.702 "write": true, 00:17:35.702 "unmap": true, 00:17:35.702 "flush": true, 00:17:35.702 "reset": true, 00:17:35.702 "nvme_admin": false, 00:17:35.702 "nvme_io": false, 00:17:35.702 "nvme_io_md": false, 00:17:35.702 "write_zeroes": true, 00:17:35.703 "zcopy": true, 00:17:35.703 "get_zone_info": false, 00:17:35.703 "zone_management": false, 00:17:35.703 "zone_append": false, 00:17:35.703 "compare": false, 00:17:35.703 "compare_and_write": false, 00:17:35.703 "abort": true, 00:17:35.703 "seek_hole": false, 00:17:35.703 "seek_data": false, 00:17:35.703 "copy": true, 00:17:35.703 "nvme_iov_md": false 00:17:35.703 }, 00:17:35.703 "memory_domains": [ 00:17:35.703 { 00:17:35.703 "dma_device_id": "system", 00:17:35.703 "dma_device_type": 1 00:17:35.703 }, 00:17:35.703 { 00:17:35.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.703 "dma_device_type": 2 00:17:35.703 } 00:17:35.703 ], 00:17:35.703 "driver_specific": {} 00:17:35.703 }' 00:17:35.703 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.961 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.961 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.961 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.961 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.219 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.219 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.219 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.219 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:36.219 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.478 "name": "BaseBdev3", 00:17:36.478 "aliases": [ 00:17:36.478 "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a" 00:17:36.478 ], 00:17:36.478 "product_name": "Malloc disk", 00:17:36.478 "block_size": 512, 00:17:36.478 "num_blocks": 65536, 00:17:36.478 "uuid": 
"cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:36.478 "assigned_rate_limits": { 00:17:36.478 "rw_ios_per_sec": 0, 00:17:36.478 "rw_mbytes_per_sec": 0, 00:17:36.478 "r_mbytes_per_sec": 0, 00:17:36.478 "w_mbytes_per_sec": 0 00:17:36.478 }, 00:17:36.478 "claimed": true, 00:17:36.478 "claim_type": "exclusive_write", 00:17:36.478 "zoned": false, 00:17:36.478 "supported_io_types": { 00:17:36.478 "read": true, 00:17:36.478 "write": true, 00:17:36.478 "unmap": true, 00:17:36.478 "flush": true, 00:17:36.478 "reset": true, 00:17:36.478 "nvme_admin": false, 00:17:36.478 "nvme_io": false, 00:17:36.478 "nvme_io_md": false, 00:17:36.478 "write_zeroes": true, 00:17:36.478 "zcopy": true, 00:17:36.478 "get_zone_info": false, 00:17:36.478 "zone_management": false, 00:17:36.478 "zone_append": false, 00:17:36.478 "compare": false, 00:17:36.478 "compare_and_write": false, 00:17:36.478 "abort": true, 00:17:36.478 "seek_hole": false, 00:17:36.478 "seek_data": false, 00:17:36.478 "copy": true, 00:17:36.478 "nvme_iov_md": false 00:17:36.478 }, 00:17:36.478 "memory_domains": [ 00:17:36.478 { 00:17:36.478 "dma_device_id": "system", 00:17:36.478 "dma_device_type": 1 00:17:36.478 }, 00:17:36.478 { 00:17:36.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.478 "dma_device_type": 2 00:17:36.478 } 00:17:36.478 ], 00:17:36.478 "driver_specific": {} 00:17:36.478 }' 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.478 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.737 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:36.995 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.995 "name": "BaseBdev4", 00:17:36.995 "aliases": [ 00:17:36.995 "9eb0b102-e143-46de-91c0-49594c8e3259" 00:17:36.995 ], 00:17:36.995 "product_name": "Malloc disk", 00:17:36.995 "block_size": 512, 00:17:36.995 "num_blocks": 65536, 00:17:36.995 "uuid": "9eb0b102-e143-46de-91c0-49594c8e3259", 00:17:36.995 "assigned_rate_limits": { 00:17:36.995 "rw_ios_per_sec": 0, 00:17:36.995 
"rw_mbytes_per_sec": 0, 00:17:36.995 "r_mbytes_per_sec": 0, 00:17:36.995 "w_mbytes_per_sec": 0 00:17:36.995 }, 00:17:36.995 "claimed": true, 00:17:36.995 "claim_type": "exclusive_write", 00:17:36.995 "zoned": false, 00:17:36.995 "supported_io_types": { 00:17:36.995 "read": true, 00:17:36.995 "write": true, 00:17:36.995 "unmap": true, 00:17:36.995 "flush": true, 00:17:36.995 "reset": true, 00:17:36.995 "nvme_admin": false, 00:17:36.995 "nvme_io": false, 00:17:36.995 "nvme_io_md": false, 00:17:36.995 "write_zeroes": true, 00:17:36.995 "zcopy": true, 00:17:36.995 "get_zone_info": false, 00:17:36.995 "zone_management": false, 00:17:36.995 "zone_append": false, 00:17:36.995 "compare": false, 00:17:36.995 "compare_and_write": false, 00:17:36.995 "abort": true, 00:17:36.995 "seek_hole": false, 00:17:36.995 "seek_data": false, 00:17:36.995 "copy": true, 00:17:36.995 "nvme_iov_md": false 00:17:36.995 }, 00:17:36.995 "memory_domains": [ 00:17:36.995 { 00:17:36.995 "dma_device_id": "system", 00:17:36.995 "dma_device_type": 1 00:17:36.995 }, 00:17:36.995 { 00:17:36.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.995 "dma_device_type": 2 00:17:36.995 } 00:17:36.995 ], 00:17:36.995 "driver_specific": {} 00:17:36.995 }' 00:17:36.995 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.995 11:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.995 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.995 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.995 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.253 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.254 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.511 [2024-07-25 11:59:23.504103] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.511 [2024-07-25 11:59:23.504126] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.511 [2024-07-25 11:59:23.504178] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.511 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.818 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.818 "name": "Existed_Raid", 00:17:37.818 "uuid": "26809cd3-0e7f-4413-9f1e-d6cfbed4b537", 00:17:37.818 "strip_size_kb": 64, 00:17:37.818 "state": "offline", 00:17:37.818 "raid_level": "raid0", 00:17:37.818 "superblock": true, 00:17:37.818 "num_base_bdevs": 4, 00:17:37.818 "num_base_bdevs_discovered": 3, 00:17:37.818 "num_base_bdevs_operational": 3, 00:17:37.818 "base_bdevs_list": [ 00:17:37.818 { 00:17:37.818 "name": null, 00:17:37.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.818 "is_configured": false, 00:17:37.818 "data_offset": 2048, 00:17:37.818 "data_size": 63488 00:17:37.818 }, 00:17:37.818 { 00:17:37.818 "name": "BaseBdev2", 00:17:37.818 "uuid": "3e70771d-278e-4d58-b36f-b75ba9345ff5", 00:17:37.818 "is_configured": true, 00:17:37.818 "data_offset": 2048, 00:17:37.818 "data_size": 63488 00:17:37.818 }, 00:17:37.818 { 00:17:37.818 "name": "BaseBdev3", 00:17:37.818 "uuid": "cf8f50dd-66d7-4f73-9b3d-e7386aa57d6a", 00:17:37.818 "is_configured": true, 00:17:37.818 "data_offset": 2048, 00:17:37.818 "data_size": 63488 00:17:37.818 }, 00:17:37.818 { 00:17:37.818 "name": "BaseBdev4", 00:17:37.818 "uuid": "9eb0b102-e143-46de-91c0-49594c8e3259", 00:17:37.818 "is_configured": true, 00:17:37.818 "data_offset": 2048, 00:17:37.818 "data_size": 63488 00:17:37.818 } 00:17:37.818 ] 00:17:37.818 }' 00:17:37.818 11:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.818 11:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.385 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:38.385 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:38.385 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.385 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:38.644 [2024-07-25 11:59:24.708297] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.644 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:38.903 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:38.903 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.903 11:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:39.162 [2024-07-25 11:59:25.171503] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:39.162 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:39.162 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.162 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.162 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:39.421 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:39.421 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.421 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:39.680 [2024-07-25 11:59:25.622525] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:39.680 [2024-07-25 11:59:25.622561] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1249830 name Existed_Raid, state offline 00:17:39.680 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:39.680 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.680 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.680 11:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:39.940 11:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:40.199 BaseBdev2 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.199 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.459 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:40.459 [ 00:17:40.459 { 00:17:40.459 "name": "BaseBdev2", 00:17:40.459 "aliases": [ 00:17:40.459 "7014910a-a722-4e8d-b0f9-4d53975976ea" 00:17:40.459 ], 00:17:40.459 "product_name": "Malloc disk", 00:17:40.459 "block_size": 512, 00:17:40.459 "num_blocks": 65536, 00:17:40.459 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:40.459 "assigned_rate_limits": { 00:17:40.459 "rw_ios_per_sec": 0, 00:17:40.459 "rw_mbytes_per_sec": 0, 00:17:40.459 "r_mbytes_per_sec": 0, 00:17:40.459 "w_mbytes_per_sec": 0 00:17:40.459 }, 00:17:40.459 "claimed": false, 00:17:40.459 "zoned": false, 00:17:40.459 "supported_io_types": { 00:17:40.459 "read": true, 00:17:40.459 "write": true, 00:17:40.459 "unmap": true, 00:17:40.459 "flush": true, 00:17:40.459 "reset": true, 00:17:40.459 "nvme_admin": false, 00:17:40.459 "nvme_io": false, 00:17:40.459 "nvme_io_md": false, 00:17:40.459 "write_zeroes": true, 00:17:40.459 "zcopy": true, 00:17:40.459 "get_zone_info": false, 00:17:40.459 "zone_management": false, 00:17:40.459 "zone_append": false, 00:17:40.459 "compare": false, 00:17:40.459 "compare_and_write": false, 00:17:40.459 "abort": true, 00:17:40.459 "seek_hole": false, 00:17:40.459 "seek_data": false, 00:17:40.459 "copy": true, 00:17:40.459 "nvme_iov_md": false 00:17:40.459 }, 00:17:40.459 "memory_domains": [ 00:17:40.459 { 00:17:40.459 "dma_device_id": "system", 00:17:40.459 "dma_device_type": 1 00:17:40.459 }, 00:17:40.459 { 00:17:40.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.459 "dma_device_type": 2 00:17:40.459 } 00:17:40.459 ], 00:17:40.459 
"driver_specific": {} 00:17:40.459 } 00:17:40.459 ] 00:17:40.459 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:40.459 11:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:40.459 11:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:40.459 11:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.718 BaseBdev3 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.718 11:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.977 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:41.237 [ 00:17:41.237 { 00:17:41.237 "name": "BaseBdev3", 00:17:41.237 "aliases": [ 00:17:41.237 "2b61bd88-85c9-4967-b289-9d4921377e70" 00:17:41.237 ], 00:17:41.237 "product_name": "Malloc disk", 00:17:41.237 "block_size": 512, 00:17:41.237 "num_blocks": 65536, 00:17:41.237 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:41.237 "assigned_rate_limits": { 00:17:41.237 "rw_ios_per_sec": 0, 00:17:41.237 "rw_mbytes_per_sec": 0, 00:17:41.237 "r_mbytes_per_sec": 0, 00:17:41.237 "w_mbytes_per_sec": 0 00:17:41.237 }, 00:17:41.237 "claimed": false, 00:17:41.237 "zoned": false, 00:17:41.237 "supported_io_types": { 00:17:41.237 "read": true, 00:17:41.237 "write": true, 00:17:41.237 "unmap": true, 00:17:41.237 "flush": true, 00:17:41.237 "reset": true, 00:17:41.237 "nvme_admin": false, 00:17:41.237 "nvme_io": false, 00:17:41.237 "nvme_io_md": false, 00:17:41.237 "write_zeroes": true, 00:17:41.237 "zcopy": true, 00:17:41.237 "get_zone_info": false, 00:17:41.237 "zone_management": false, 00:17:41.237 "zone_append": false, 00:17:41.237 "compare": false, 00:17:41.237 "compare_and_write": false, 00:17:41.237 "abort": true, 00:17:41.237 "seek_hole": false, 00:17:41.237 "seek_data": false, 00:17:41.237 "copy": true, 00:17:41.237 "nvme_iov_md": false 00:17:41.237 }, 00:17:41.237 "memory_domains": [ 00:17:41.237 { 00:17:41.237 "dma_device_id": "system", 00:17:41.237 "dma_device_type": 1 00:17:41.237 }, 00:17:41.237 { 00:17:41.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.237 "dma_device_type": 2 00:17:41.237 } 00:17:41.237 ], 00:17:41.237 "driver_specific": {} 00:17:41.237 } 00:17:41.237 ] 00:17:41.237 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:41.237 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( 
i++ )) 00:17:41.237 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:41.237 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:41.497 BaseBdev4 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.497 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.756 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:42.016 [ 00:17:42.016 { 00:17:42.016 "name": "BaseBdev4", 00:17:42.016 "aliases": [ 00:17:42.016 "b310ad70-1884-4899-bd99-02e3bea2358d" 00:17:42.016 ], 00:17:42.016 "product_name": "Malloc disk", 00:17:42.016 "block_size": 512, 00:17:42.016 "num_blocks": 65536, 00:17:42.016 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:42.016 "assigned_rate_limits": { 00:17:42.016 "rw_ios_per_sec": 0, 00:17:42.016 "rw_mbytes_per_sec": 0, 00:17:42.016 "r_mbytes_per_sec": 0, 00:17:42.016 "w_mbytes_per_sec": 0 00:17:42.016 }, 00:17:42.016 "claimed": false, 00:17:42.016 "zoned": false, 00:17:42.016 "supported_io_types": { 00:17:42.016 "read": true, 00:17:42.016 "write": true, 00:17:42.016 "unmap": true, 00:17:42.016 "flush": true, 00:17:42.016 "reset": true, 00:17:42.016 "nvme_admin": false, 00:17:42.016 "nvme_io": false, 00:17:42.016 "nvme_io_md": false, 00:17:42.016 "write_zeroes": true, 00:17:42.016 "zcopy": true, 00:17:42.016 "get_zone_info": false, 00:17:42.016 "zone_management": false, 00:17:42.016 "zone_append": false, 00:17:42.016 "compare": false, 00:17:42.016 "compare_and_write": false, 00:17:42.016 "abort": true, 00:17:42.016 "seek_hole": false, 00:17:42.016 "seek_data": false, 00:17:42.016 "copy": true, 00:17:42.016 "nvme_iov_md": false 00:17:42.016 }, 00:17:42.016 "memory_domains": [ 00:17:42.016 { 00:17:42.016 "dma_device_id": "system", 00:17:42.016 "dma_device_type": 1 00:17:42.016 }, 00:17:42.016 { 00:17:42.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.016 "dma_device_type": 2 00:17:42.016 } 00:17:42.016 ], 00:17:42.016 "driver_specific": {} 00:17:42.016 } 00:17:42.016 ] 00:17:42.016 11:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:42.016 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:42.016 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:42.016 11:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.276 [2024-07-25 11:59:28.136317] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.276 [2024-07-25 11:59:28.136353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.276 [2024-07-25 11:59:28.136370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.277 [2024-07-25 11:59:28.137579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.277 [2024-07-25 11:59:28.137618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.277 "name": "Existed_Raid", 00:17:42.277 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:42.277 "strip_size_kb": 64, 00:17:42.277 "state": "configuring", 00:17:42.277 "raid_level": "raid0", 00:17:42.277 "superblock": true, 00:17:42.277 "num_base_bdevs": 4, 00:17:42.277 "num_base_bdevs_discovered": 3, 00:17:42.277 "num_base_bdevs_operational": 4, 00:17:42.277 "base_bdevs_list": [ 00:17:42.277 { 00:17:42.277 "name": "BaseBdev1", 00:17:42.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.277 "is_configured": false, 00:17:42.277 "data_offset": 0, 00:17:42.277 "data_size": 0 00:17:42.277 }, 00:17:42.277 { 00:17:42.277 "name": "BaseBdev2", 00:17:42.277 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:42.277 "is_configured": true, 00:17:42.277 "data_offset": 2048, 00:17:42.277 "data_size": 63488 00:17:42.277 }, 00:17:42.277 { 00:17:42.277 "name": "BaseBdev3", 00:17:42.277 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:42.277 "is_configured": true, 00:17:42.277 "data_offset": 2048, 00:17:42.277 "data_size": 63488 00:17:42.277 }, 00:17:42.277 { 
00:17:42.277 "name": "BaseBdev4", 00:17:42.277 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:42.277 "is_configured": true, 00:17:42.277 "data_offset": 2048, 00:17:42.277 "data_size": 63488 00:17:42.277 } 00:17:42.277 ] 00:17:42.277 }' 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.277 11:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 11:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:43.103 [2024-07-25 11:59:29.138939] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.103 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.362 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.362 "name": "Existed_Raid", 00:17:43.362 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:43.362 "strip_size_kb": 64, 00:17:43.362 "state": "configuring", 00:17:43.362 "raid_level": "raid0", 00:17:43.362 "superblock": true, 00:17:43.362 "num_base_bdevs": 4, 00:17:43.362 "num_base_bdevs_discovered": 2, 00:17:43.362 "num_base_bdevs_operational": 4, 00:17:43.362 "base_bdevs_list": [ 00:17:43.362 { 00:17:43.362 "name": "BaseBdev1", 00:17:43.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.362 "is_configured": false, 00:17:43.362 "data_offset": 0, 00:17:43.362 "data_size": 0 00:17:43.362 }, 00:17:43.362 { 00:17:43.362 "name": null, 00:17:43.362 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:43.362 "is_configured": false, 00:17:43.362 "data_offset": 2048, 00:17:43.362 "data_size": 63488 00:17:43.362 }, 00:17:43.362 { 00:17:43.362 "name": "BaseBdev3", 00:17:43.362 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:43.362 "is_configured": true, 00:17:43.362 "data_offset": 2048, 00:17:43.362 "data_size": 63488 00:17:43.362 }, 00:17:43.362 { 00:17:43.362 "name": "BaseBdev4", 00:17:43.362 "uuid": 
"b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:43.362 "is_configured": true, 00:17:43.362 "data_offset": 2048, 00:17:43.362 "data_size": 63488 00:17:43.362 } 00:17:43.362 ] 00:17:43.362 }' 00:17:43.362 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.362 11:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.929 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:43.929 11:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.187 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:44.188 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.446 [2024-07-25 11:59:30.417367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.446 BaseBdev1 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:44.446 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.704 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.963 [ 00:17:44.963 { 00:17:44.963 "name": "BaseBdev1", 00:17:44.963 "aliases": [ 00:17:44.963 "55d50f03-fad1-4265-86bc-5a1b255bbe0b" 00:17:44.963 ], 00:17:44.963 "product_name": "Malloc disk", 00:17:44.963 "block_size": 512, 00:17:44.963 "num_blocks": 65536, 00:17:44.964 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:44.964 "assigned_rate_limits": { 00:17:44.964 "rw_ios_per_sec": 0, 00:17:44.964 "rw_mbytes_per_sec": 0, 00:17:44.964 "r_mbytes_per_sec": 0, 00:17:44.964 "w_mbytes_per_sec": 0 00:17:44.964 }, 00:17:44.964 "claimed": true, 00:17:44.964 "claim_type": "exclusive_write", 00:17:44.964 "zoned": false, 00:17:44.964 "supported_io_types": { 00:17:44.964 "read": true, 00:17:44.964 "write": true, 00:17:44.964 "unmap": true, 00:17:44.964 "flush": true, 00:17:44.964 "reset": true, 00:17:44.964 "nvme_admin": false, 00:17:44.964 "nvme_io": false, 00:17:44.964 "nvme_io_md": false, 00:17:44.964 "write_zeroes": true, 00:17:44.964 "zcopy": true, 00:17:44.964 "get_zone_info": false, 00:17:44.964 "zone_management": false, 00:17:44.964 "zone_append": false, 00:17:44.964 "compare": false, 00:17:44.964 "compare_and_write": false, 00:17:44.964 "abort": true, 00:17:44.964 "seek_hole": false, 
00:17:44.964 "seek_data": false, 00:17:44.964 "copy": true, 00:17:44.964 "nvme_iov_md": false 00:17:44.964 }, 00:17:44.964 "memory_domains": [ 00:17:44.964 { 00:17:44.964 "dma_device_id": "system", 00:17:44.964 "dma_device_type": 1 00:17:44.964 }, 00:17:44.964 { 00:17:44.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.964 "dma_device_type": 2 00:17:44.964 } 00:17:44.964 ], 00:17:44.964 "driver_specific": {} 00:17:44.964 } 00:17:44.964 ] 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.964 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.222 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.222 "name": "Existed_Raid", 00:17:45.222 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:45.222 "strip_size_kb": 64, 00:17:45.222 "state": "configuring", 00:17:45.222 "raid_level": "raid0", 00:17:45.223 "superblock": true, 00:17:45.223 "num_base_bdevs": 4, 00:17:45.223 "num_base_bdevs_discovered": 3, 00:17:45.223 "num_base_bdevs_operational": 4, 00:17:45.223 "base_bdevs_list": [ 00:17:45.223 { 00:17:45.223 "name": "BaseBdev1", 00:17:45.223 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:45.223 "is_configured": true, 00:17:45.223 "data_offset": 2048, 00:17:45.223 "data_size": 63488 00:17:45.223 }, 00:17:45.223 { 00:17:45.223 "name": null, 00:17:45.223 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:45.223 "is_configured": false, 00:17:45.223 "data_offset": 2048, 00:17:45.223 "data_size": 63488 00:17:45.223 }, 00:17:45.223 { 00:17:45.223 "name": "BaseBdev3", 00:17:45.223 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:45.223 "is_configured": true, 00:17:45.223 "data_offset": 2048, 00:17:45.223 "data_size": 63488 00:17:45.223 }, 00:17:45.223 { 00:17:45.223 "name": "BaseBdev4", 00:17:45.223 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:45.223 "is_configured": true, 00:17:45.223 "data_offset": 2048, 00:17:45.223 "data_size": 63488 00:17:45.223 } 00:17:45.223 ] 00:17:45.223 }' 00:17:45.223 11:59:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.223 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.789 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.789 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:45.789 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:45.789 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:46.048 [2024-07-25 11:59:32.101816] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.048 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.307 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.307 "name": "Existed_Raid", 00:17:46.307 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:46.307 "strip_size_kb": 64, 00:17:46.307 "state": "configuring", 00:17:46.307 "raid_level": "raid0", 00:17:46.307 "superblock": true, 00:17:46.307 "num_base_bdevs": 4, 00:17:46.307 "num_base_bdevs_discovered": 2, 00:17:46.307 "num_base_bdevs_operational": 4, 00:17:46.307 "base_bdevs_list": [ 00:17:46.307 { 00:17:46.307 "name": "BaseBdev1", 00:17:46.307 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:46.307 "is_configured": true, 00:17:46.307 "data_offset": 2048, 00:17:46.307 "data_size": 63488 00:17:46.307 }, 00:17:46.307 { 00:17:46.307 "name": null, 00:17:46.307 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:46.307 "is_configured": false, 00:17:46.307 "data_offset": 2048, 00:17:46.307 "data_size": 63488 00:17:46.307 }, 00:17:46.307 { 00:17:46.307 "name": null, 00:17:46.307 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 
00:17:46.307 "is_configured": false, 00:17:46.307 "data_offset": 2048, 00:17:46.307 "data_size": 63488 00:17:46.307 }, 00:17:46.307 { 00:17:46.307 "name": "BaseBdev4", 00:17:46.307 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:46.307 "is_configured": true, 00:17:46.307 "data_offset": 2048, 00:17:46.307 "data_size": 63488 00:17:46.307 } 00:17:46.307 ] 00:17:46.307 }' 00:17:46.307 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.307 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.939 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.939 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:47.198 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:47.198 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:47.455 [2024-07-25 11:59:33.357209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.455 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.456 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.713 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.713 "name": "Existed_Raid", 00:17:47.713 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:47.713 "strip_size_kb": 64, 00:17:47.713 "state": "configuring", 00:17:47.713 "raid_level": "raid0", 00:17:47.713 "superblock": true, 00:17:47.713 "num_base_bdevs": 4, 00:17:47.713 "num_base_bdevs_discovered": 3, 00:17:47.713 "num_base_bdevs_operational": 4, 00:17:47.713 "base_bdevs_list": [ 00:17:47.713 { 00:17:47.713 "name": "BaseBdev1", 00:17:47.713 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:47.713 
"is_configured": true, 00:17:47.713 "data_offset": 2048, 00:17:47.713 "data_size": 63488 00:17:47.713 }, 00:17:47.713 { 00:17:47.713 "name": null, 00:17:47.713 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:47.713 "is_configured": false, 00:17:47.713 "data_offset": 2048, 00:17:47.713 "data_size": 63488 00:17:47.713 }, 00:17:47.713 { 00:17:47.713 "name": "BaseBdev3", 00:17:47.713 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:47.713 "is_configured": true, 00:17:47.713 "data_offset": 2048, 00:17:47.713 "data_size": 63488 00:17:47.713 }, 00:17:47.713 { 00:17:47.713 "name": "BaseBdev4", 00:17:47.713 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:47.713 "is_configured": true, 00:17:47.713 "data_offset": 2048, 00:17:47.713 "data_size": 63488 00:17:47.713 } 00:17:47.713 ] 00:17:47.713 }' 00:17:47.713 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.713 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.282 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.282 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.541 [2024-07-25 11:59:34.612533] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.541 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.799 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.799 "name": "Existed_Raid", 00:17:48.799 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:48.799 "strip_size_kb": 
64, 00:17:48.799 "state": "configuring", 00:17:48.799 "raid_level": "raid0", 00:17:48.800 "superblock": true, 00:17:48.800 "num_base_bdevs": 4, 00:17:48.800 "num_base_bdevs_discovered": 2, 00:17:48.800 "num_base_bdevs_operational": 4, 00:17:48.800 "base_bdevs_list": [ 00:17:48.800 { 00:17:48.800 "name": null, 00:17:48.800 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:48.800 "is_configured": false, 00:17:48.800 "data_offset": 2048, 00:17:48.800 "data_size": 63488 00:17:48.800 }, 00:17:48.800 { 00:17:48.800 "name": null, 00:17:48.800 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:48.800 "is_configured": false, 00:17:48.800 "data_offset": 2048, 00:17:48.800 "data_size": 63488 00:17:48.800 }, 00:17:48.800 { 00:17:48.800 "name": "BaseBdev3", 00:17:48.800 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:48.800 "is_configured": true, 00:17:48.800 "data_offset": 2048, 00:17:48.800 "data_size": 63488 00:17:48.800 }, 00:17:48.800 { 00:17:48.800 "name": "BaseBdev4", 00:17:48.800 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:48.800 "is_configured": true, 00:17:48.800 "data_offset": 2048, 00:17:48.800 "data_size": 63488 00:17:48.800 } 00:17:48.800 ] 00:17:48.800 }' 00:17:48.800 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.800 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.366 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.366 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:49.625 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:49.625 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:49.883 [2024-07-25 11:59:35.817836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.883 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:49.883 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.883 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.883 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.884 11:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.142 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.142 "name": "Existed_Raid", 00:17:50.142 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:50.142 "strip_size_kb": 64, 00:17:50.142 "state": "configuring", 00:17:50.142 "raid_level": "raid0", 00:17:50.142 "superblock": true, 00:17:50.142 "num_base_bdevs": 4, 00:17:50.142 "num_base_bdevs_discovered": 3, 00:17:50.142 "num_base_bdevs_operational": 4, 00:17:50.142 "base_bdevs_list": [ 00:17:50.142 { 00:17:50.142 "name": null, 00:17:50.142 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:50.142 "is_configured": false, 00:17:50.142 "data_offset": 2048, 00:17:50.142 "data_size": 63488 00:17:50.142 }, 00:17:50.142 { 00:17:50.142 "name": "BaseBdev2", 00:17:50.142 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:50.142 "is_configured": true, 00:17:50.142 "data_offset": 2048, 00:17:50.142 "data_size": 63488 00:17:50.142 }, 00:17:50.142 { 00:17:50.142 "name": "BaseBdev3", 00:17:50.142 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:50.142 "is_configured": true, 00:17:50.142 "data_offset": 2048, 00:17:50.142 "data_size": 63488 00:17:50.142 }, 00:17:50.142 { 00:17:50.142 "name": "BaseBdev4", 00:17:50.142 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:50.142 "is_configured": true, 00:17:50.142 "data_offset": 2048, 00:17:50.142 "data_size": 63488 00:17:50.142 } 00:17:50.142 ] 00:17:50.142 }' 00:17:50.142 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.142 11:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.708 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.708 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:50.966 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:50.966 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.966 11:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:51.224 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 55d50f03-fad1-4265-86bc-5a1b255bbe0b 00:17:51.224 [2024-07-25 11:59:37.312936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:51.224 [2024-07-25 11:59:37.313075] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x123fc90 00:17:51.224 [2024-07-25 11:59:37.313087] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:51.224 [2024-07-25 11:59:37.313251] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1240d40 00:17:51.225 [2024-07-25 11:59:37.313356] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x123fc90 00:17:51.225 [2024-07-25 11:59:37.313365] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x123fc90 00:17:51.225 [2024-07-25 11:59:37.313447] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.225 NewBaseBdev 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.225 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.483 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:51.741 [ 00:17:51.741 { 00:17:51.742 "name": "NewBaseBdev", 00:17:51.742 "aliases": [ 00:17:51.742 "55d50f03-fad1-4265-86bc-5a1b255bbe0b" 00:17:51.742 ], 00:17:51.742 "product_name": "Malloc disk", 00:17:51.742 "block_size": 512, 00:17:51.742 "num_blocks": 65536, 00:17:51.742 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:51.742 "assigned_rate_limits": { 00:17:51.742 "rw_ios_per_sec": 0, 00:17:51.742 "rw_mbytes_per_sec": 0, 00:17:51.742 "r_mbytes_per_sec": 0, 00:17:51.742 "w_mbytes_per_sec": 0 00:17:51.742 }, 00:17:51.742 "claimed": true, 00:17:51.742 "claim_type": "exclusive_write", 00:17:51.742 "zoned": false, 00:17:51.742 "supported_io_types": { 00:17:51.742 "read": true, 00:17:51.742 "write": true, 00:17:51.742 "unmap": true, 00:17:51.742 "flush": true, 00:17:51.742 "reset": true, 00:17:51.742 "nvme_admin": false, 00:17:51.742 "nvme_io": false, 00:17:51.742 "nvme_io_md": false, 00:17:51.742 "write_zeroes": true, 00:17:51.742 "zcopy": true, 00:17:51.742 "get_zone_info": false, 00:17:51.742 "zone_management": false, 00:17:51.742 "zone_append": false, 00:17:51.742 "compare": false, 00:17:51.742 "compare_and_write": false, 00:17:51.742 "abort": true, 00:17:51.742 "seek_hole": false, 00:17:51.742 "seek_data": false, 00:17:51.742 "copy": true, 00:17:51.742 "nvme_iov_md": false 00:17:51.742 }, 00:17:51.742 "memory_domains": [ 00:17:51.742 { 00:17:51.742 "dma_device_id": "system", 00:17:51.742 "dma_device_type": 1 00:17:51.742 }, 00:17:51.742 { 00:17:51.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.742 "dma_device_type": 2 00:17:51.742 } 00:17:51.742 ], 00:17:51.742 "driver_specific": {} 00:17:51.742 } 00:17:51.742 ] 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:51.742 
11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.742 11:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.000 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.000 "name": "Existed_Raid", 00:17:52.000 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:52.000 "strip_size_kb": 64, 00:17:52.000 "state": "online", 00:17:52.000 "raid_level": "raid0", 00:17:52.000 "superblock": true, 00:17:52.000 "num_base_bdevs": 4, 00:17:52.000 "num_base_bdevs_discovered": 4, 00:17:52.000 "num_base_bdevs_operational": 4, 00:17:52.000 "base_bdevs_list": [ 00:17:52.000 { 00:17:52.000 "name": "NewBaseBdev", 00:17:52.000 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:52.000 "is_configured": true, 00:17:52.000 "data_offset": 2048, 00:17:52.000 "data_size": 63488 00:17:52.000 }, 00:17:52.001 { 00:17:52.001 "name": "BaseBdev2", 00:17:52.001 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:52.001 "is_configured": true, 00:17:52.001 "data_offset": 2048, 00:17:52.001 "data_size": 63488 00:17:52.001 }, 00:17:52.001 { 00:17:52.001 "name": "BaseBdev3", 00:17:52.001 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:52.001 "is_configured": true, 00:17:52.001 "data_offset": 2048, 00:17:52.001 "data_size": 63488 00:17:52.001 }, 00:17:52.001 { 00:17:52.001 "name": "BaseBdev4", 00:17:52.001 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:52.001 "is_configured": true, 00:17:52.001 "data_offset": 2048, 00:17:52.001 "data_size": 63488 00:17:52.001 } 00:17:52.001 ] 00:17:52.001 }' 00:17:52.001 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.001 11:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:52.565 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:52.823 [2024-07-25 11:59:38.813214] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:52.823 "name": "Existed_Raid", 00:17:52.823 "aliases": [ 00:17:52.823 "41222c50-3f13-4c74-9141-5a9bf142422e" 00:17:52.823 ], 00:17:52.823 "product_name": "Raid Volume", 00:17:52.823 "block_size": 512, 00:17:52.823 "num_blocks": 253952, 00:17:52.823 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:52.823 "assigned_rate_limits": { 00:17:52.823 "rw_ios_per_sec": 0, 00:17:52.823 "rw_mbytes_per_sec": 0, 00:17:52.823 "r_mbytes_per_sec": 0, 00:17:52.823 "w_mbytes_per_sec": 0 00:17:52.823 }, 00:17:52.823 "claimed": false, 00:17:52.823 "zoned": false, 00:17:52.823 "supported_io_types": { 00:17:52.823 "read": true, 00:17:52.823 "write": true, 00:17:52.823 "unmap": true, 00:17:52.823 "flush": true, 00:17:52.823 "reset": true, 00:17:52.823 "nvme_admin": false, 00:17:52.823 "nvme_io": false, 00:17:52.823 "nvme_io_md": false, 00:17:52.823 "write_zeroes": true, 00:17:52.823 "zcopy": false, 00:17:52.823 "get_zone_info": false, 00:17:52.823 "zone_management": false, 00:17:52.823 "zone_append": false, 00:17:52.823 "compare": false, 00:17:52.823 "compare_and_write": false, 00:17:52.823 "abort": false, 00:17:52.823 "seek_hole": false, 00:17:52.823 "seek_data": false, 00:17:52.823 "copy": false, 00:17:52.823 "nvme_iov_md": false 00:17:52.823 }, 00:17:52.823 "memory_domains": [ 00:17:52.823 { 00:17:52.823 "dma_device_id": "system", 00:17:52.823 "dma_device_type": 1 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.823 "dma_device_type": 2 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "system", 00:17:52.823 "dma_device_type": 1 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.823 "dma_device_type": 2 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "system", 00:17:52.823 "dma_device_type": 1 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.823 "dma_device_type": 2 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "system", 00:17:52.823 "dma_device_type": 1 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.823 "dma_device_type": 2 00:17:52.823 } 00:17:52.823 ], 00:17:52.823 "driver_specific": { 00:17:52.823 "raid": { 00:17:52.823 "uuid": "41222c50-3f13-4c74-9141-5a9bf142422e", 00:17:52.823 "strip_size_kb": 64, 00:17:52.823 "state": "online", 00:17:52.823 "raid_level": "raid0", 00:17:52.823 "superblock": true, 00:17:52.823 "num_base_bdevs": 4, 00:17:52.823 "num_base_bdevs_discovered": 4, 00:17:52.823 "num_base_bdevs_operational": 4, 00:17:52.823 "base_bdevs_list": [ 00:17:52.823 { 00:17:52.823 "name": "NewBaseBdev", 00:17:52.823 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:52.823 "is_configured": true, 00:17:52.823 "data_offset": 2048, 00:17:52.823 "data_size": 63488 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "name": "BaseBdev2", 00:17:52.823 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:52.823 "is_configured": true, 00:17:52.823 "data_offset": 2048, 00:17:52.823 "data_size": 63488 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 
"name": "BaseBdev3", 00:17:52.823 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:52.823 "is_configured": true, 00:17:52.823 "data_offset": 2048, 00:17:52.823 "data_size": 63488 00:17:52.823 }, 00:17:52.823 { 00:17:52.823 "name": "BaseBdev4", 00:17:52.823 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:52.823 "is_configured": true, 00:17:52.823 "data_offset": 2048, 00:17:52.823 "data_size": 63488 00:17:52.823 } 00:17:52.823 ] 00:17:52.823 } 00:17:52.823 } 00:17:52.823 }' 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:52.823 BaseBdev2 00:17:52.823 BaseBdev3 00:17:52.823 BaseBdev4' 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:52.823 11:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.081 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.081 "name": "NewBaseBdev", 00:17:53.081 "aliases": [ 00:17:53.081 "55d50f03-fad1-4265-86bc-5a1b255bbe0b" 00:17:53.081 ], 00:17:53.081 "product_name": "Malloc disk", 00:17:53.081 "block_size": 512, 00:17:53.081 "num_blocks": 65536, 00:17:53.081 "uuid": "55d50f03-fad1-4265-86bc-5a1b255bbe0b", 00:17:53.081 "assigned_rate_limits": { 00:17:53.081 "rw_ios_per_sec": 0, 00:17:53.081 "rw_mbytes_per_sec": 0, 00:17:53.081 "r_mbytes_per_sec": 0, 00:17:53.081 "w_mbytes_per_sec": 0 00:17:53.081 }, 00:17:53.081 "claimed": true, 00:17:53.081 "claim_type": "exclusive_write", 00:17:53.081 "zoned": false, 00:17:53.081 "supported_io_types": { 00:17:53.081 "read": true, 00:17:53.081 "write": true, 00:17:53.081 "unmap": true, 00:17:53.081 "flush": true, 00:17:53.081 "reset": true, 00:17:53.081 "nvme_admin": false, 00:17:53.081 "nvme_io": false, 00:17:53.081 "nvme_io_md": false, 00:17:53.081 "write_zeroes": true, 00:17:53.081 "zcopy": true, 00:17:53.081 "get_zone_info": false, 00:17:53.081 "zone_management": false, 00:17:53.081 "zone_append": false, 00:17:53.081 "compare": false, 00:17:53.081 "compare_and_write": false, 00:17:53.081 "abort": true, 00:17:53.081 "seek_hole": false, 00:17:53.081 "seek_data": false, 00:17:53.081 "copy": true, 00:17:53.081 "nvme_iov_md": false 00:17:53.081 }, 00:17:53.081 "memory_domains": [ 00:17:53.081 { 00:17:53.081 "dma_device_id": "system", 00:17:53.081 "dma_device_type": 1 00:17:53.081 }, 00:17:53.081 { 00:17:53.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.081 "dma_device_type": 2 00:17:53.081 } 00:17:53.081 ], 00:17:53.081 "driver_specific": {} 00:17:53.081 }' 00:17:53.081 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.081 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.081 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.081 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.339 11:59:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.339 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:53.597 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.597 "name": "BaseBdev2", 00:17:53.597 "aliases": [ 00:17:53.597 "7014910a-a722-4e8d-b0f9-4d53975976ea" 00:17:53.597 ], 00:17:53.597 "product_name": "Malloc disk", 00:17:53.597 "block_size": 512, 00:17:53.597 "num_blocks": 65536, 00:17:53.597 "uuid": "7014910a-a722-4e8d-b0f9-4d53975976ea", 00:17:53.597 "assigned_rate_limits": { 00:17:53.597 "rw_ios_per_sec": 0, 00:17:53.597 "rw_mbytes_per_sec": 0, 00:17:53.597 "r_mbytes_per_sec": 0, 00:17:53.597 "w_mbytes_per_sec": 0 00:17:53.597 }, 00:17:53.597 "claimed": true, 00:17:53.597 "claim_type": "exclusive_write", 00:17:53.597 "zoned": false, 00:17:53.597 "supported_io_types": { 00:17:53.597 "read": true, 00:17:53.597 "write": true, 00:17:53.597 "unmap": true, 00:17:53.597 "flush": true, 00:17:53.597 "reset": true, 00:17:53.597 "nvme_admin": false, 00:17:53.597 "nvme_io": false, 00:17:53.597 "nvme_io_md": false, 00:17:53.597 "write_zeroes": true, 00:17:53.597 "zcopy": true, 00:17:53.597 "get_zone_info": false, 00:17:53.597 "zone_management": false, 00:17:53.597 "zone_append": false, 00:17:53.597 "compare": false, 00:17:53.597 "compare_and_write": false, 00:17:53.597 "abort": true, 00:17:53.597 "seek_hole": false, 00:17:53.597 "seek_data": false, 00:17:53.597 "copy": true, 00:17:53.597 "nvme_iov_md": false 00:17:53.597 }, 00:17:53.597 "memory_domains": [ 00:17:53.597 { 00:17:53.597 "dma_device_id": "system", 00:17:53.597 "dma_device_type": 1 00:17:53.597 }, 00:17:53.597 { 00:17:53.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.597 "dma_device_type": 2 00:17:53.597 } 00:17:53.597 ], 00:17:53.597 "driver_specific": {} 00:17:53.597 }' 00:17:53.597 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.597 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.855 11:59:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.855 11:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.114 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.114 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.114 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:54.114 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.372 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.372 "name": "BaseBdev3", 00:17:54.372 "aliases": [ 00:17:54.372 "2b61bd88-85c9-4967-b289-9d4921377e70" 00:17:54.372 ], 00:17:54.372 "product_name": "Malloc disk", 00:17:54.372 "block_size": 512, 00:17:54.372 "num_blocks": 65536, 00:17:54.372 "uuid": "2b61bd88-85c9-4967-b289-9d4921377e70", 00:17:54.373 "assigned_rate_limits": { 00:17:54.373 "rw_ios_per_sec": 0, 00:17:54.373 "rw_mbytes_per_sec": 0, 00:17:54.373 "r_mbytes_per_sec": 0, 00:17:54.373 "w_mbytes_per_sec": 0 00:17:54.373 }, 00:17:54.373 "claimed": true, 00:17:54.373 "claim_type": "exclusive_write", 00:17:54.373 "zoned": false, 00:17:54.373 "supported_io_types": { 00:17:54.373 "read": true, 00:17:54.373 "write": true, 00:17:54.373 "unmap": true, 00:17:54.373 "flush": true, 00:17:54.373 "reset": true, 00:17:54.373 "nvme_admin": false, 00:17:54.373 "nvme_io": false, 00:17:54.373 "nvme_io_md": false, 00:17:54.373 "write_zeroes": true, 00:17:54.373 "zcopy": true, 00:17:54.373 "get_zone_info": false, 00:17:54.373 "zone_management": false, 00:17:54.373 "zone_append": false, 00:17:54.373 "compare": false, 00:17:54.373 "compare_and_write": false, 00:17:54.373 "abort": true, 00:17:54.373 "seek_hole": false, 00:17:54.373 "seek_data": false, 00:17:54.373 "copy": true, 00:17:54.373 "nvme_iov_md": false 00:17:54.373 }, 00:17:54.373 "memory_domains": [ 00:17:54.373 { 00:17:54.373 "dma_device_id": "system", 00:17:54.373 "dma_device_type": 1 00:17:54.373 }, 00:17:54.373 { 00:17:54.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.373 "dma_device_type": 2 00:17:54.373 } 00:17:54.373 ], 00:17:54.373 "driver_specific": {} 00:17:54.373 }' 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.373 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.373 11:59:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:54.632 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.891 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.891 "name": "BaseBdev4", 00:17:54.891 "aliases": [ 00:17:54.891 "b310ad70-1884-4899-bd99-02e3bea2358d" 00:17:54.891 ], 00:17:54.891 "product_name": "Malloc disk", 00:17:54.891 "block_size": 512, 00:17:54.891 "num_blocks": 65536, 00:17:54.891 "uuid": "b310ad70-1884-4899-bd99-02e3bea2358d", 00:17:54.891 "assigned_rate_limits": { 00:17:54.891 "rw_ios_per_sec": 0, 00:17:54.891 "rw_mbytes_per_sec": 0, 00:17:54.891 "r_mbytes_per_sec": 0, 00:17:54.891 "w_mbytes_per_sec": 0 00:17:54.891 }, 00:17:54.891 "claimed": true, 00:17:54.891 "claim_type": "exclusive_write", 00:17:54.891 "zoned": false, 00:17:54.891 "supported_io_types": { 00:17:54.891 "read": true, 00:17:54.891 "write": true, 00:17:54.891 "unmap": true, 00:17:54.891 "flush": true, 00:17:54.891 "reset": true, 00:17:54.891 "nvme_admin": false, 00:17:54.891 "nvme_io": false, 00:17:54.891 "nvme_io_md": false, 00:17:54.891 "write_zeroes": true, 00:17:54.891 "zcopy": true, 00:17:54.891 "get_zone_info": false, 00:17:54.891 "zone_management": false, 00:17:54.891 "zone_append": false, 00:17:54.891 "compare": false, 00:17:54.891 "compare_and_write": false, 00:17:54.891 "abort": true, 00:17:54.891 "seek_hole": false, 00:17:54.891 "seek_data": false, 00:17:54.892 "copy": true, 00:17:54.892 "nvme_iov_md": false 00:17:54.892 }, 00:17:54.892 "memory_domains": [ 00:17:54.892 { 00:17:54.892 "dma_device_id": "system", 00:17:54.892 "dma_device_type": 1 00:17:54.892 }, 00:17:54.892 { 00:17:54.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.892 "dma_device_type": 2 00:17:54.892 } 00:17:54.892 ], 00:17:54.892 "driver_specific": {} 00:17:54.892 }' 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.892 11:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.151 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.151 11:59:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.151 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.151 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.151 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.151 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.410 [2024-07-25 11:59:41.355628] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.410 [2024-07-25 11:59:41.355650] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.410 [2024-07-25 11:59:41.355697] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.410 [2024-07-25 11:59:41.355751] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.410 [2024-07-25 11:59:41.355762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x123fc90 name Existed_Raid, state offline 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4171236 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4171236 ']' 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4171236 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4171236 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4171236' 00:17:55.410 killing process with pid 4171236 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4171236 00:17:55.410 [2024-07-25 11:59:41.428706] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.410 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4171236 00:17:55.410 [2024-07-25 11:59:41.461627] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.670 11:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:55.670 00:17:55.670 real 0m31.775s 00:17:55.670 user 0m58.289s 00:17:55.670 sys 0m5.731s 00:17:55.670 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.670 11:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.670 ************************************ 00:17:55.670 END TEST raid_state_function_test_sb 00:17:55.670 ************************************ 00:17:55.670 11:59:41 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:55.670 11:59:41 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:55.670 11:59:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.670 11:59:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.670 ************************************ 00:17:55.670 START TEST raid_superblock_test 00:17:55.670 ************************************ 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=4177194 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 4177194 /var/tmp/spdk-raid.sock 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 4177194 ']' 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:55.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
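The trace above launches the standalone bdev_svc app on its own RPC socket (-r /var/tmp/spdk-raid.sock) with the bdev_raid debug log flag, then waits for that socket before driving the whole array setup through rpc.py. A condensed, hand-written sketch of that sequence follows; the individual commands are the same ones visible in the trace below, while the readiness poll (using the generic rpc_get_methods RPC instead of the harness's waitforlisten helper), the RPC/SPDK shell variables, and the for-loop are editorial additions for illustration only.

# Sketch only: reproduce this test's setup outside the autotest harness.
# Assumes the SPDK tree path shown in this log; rpc_get_methods is used here as a
# stand-in readiness probe, not the harness's actual waitforlisten implementation.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK/scripts/rpc.py -s $SOCK"

$SPDK/test/app/bdev_svc/bdev_svc -r $SOCK -L bdev_raid &   # same app and flags as in the trace
raid_pid=$!
until $RPC rpc_get_methods >/dev/null 2>&1; do             # wait until the socket answers RPCs
    kill -0 $raid_pid || exit 1                            # give up if the app already died
    sleep 0.5
done

# Four 32 MiB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev pt1..pt4.
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done

# raid0 across the four passthru bdevs, 64 KiB strip size, with an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# The state and property checks in the trace boil down to inspecting this output.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

Teardown mirrors setup: the trace further on removes the volume with bdev_raid_delete raid_bdev1 and deletes the passthru bdevs before stopping the app.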
00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.670 11:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.930 [2024-07-25 11:59:41.802192] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:17:55.930 [2024-07-25 11:59:41.802250] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177194 ] 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 
0000:3f:01.4 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:55.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:55.930 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:55.930 [2024-07-25 11:59:41.934895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.930 [2024-07-25 11:59:42.021245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.190 [2024-07-25 11:59:42.082107] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.190 [2024-07-25 11:59:42.082162] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.758 11:59:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.758 11:59:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.759 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:57.018 malloc1 00:17:57.018 11:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.018 [2024-07-25 11:59:43.134892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.018 [2024-07-25 11:59:43.134933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.018 [2024-07-25 11:59:43.134951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd3e2f0 00:17:57.018 [2024-07-25 11:59:43.134962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.018 [2024-07-25 11:59:43.136406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.018 [2024-07-25 11:59:43.136432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.277 pt1 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.277 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:57.277 malloc2 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.535 [2024-07-25 11:59:43.564480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.535 [2024-07-25 11:59:43.564520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.535 [2024-07-25 11:59:43.564535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd3f6d0 00:17:57.535 [2024-07-25 11:59:43.564546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.535 [2024-07-25 11:59:43.565912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.535 [2024-07-25 11:59:43.565937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.535 pt2 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.535 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:57.793 malloc3 00:17:57.793 11:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.053 [2024-07-25 11:59:44.049925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.053 [2024-07-25 11:59:44.049962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.053 [2024-07-25 11:59:44.049977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed86b0 00:17:58.053 [2024-07-25 11:59:44.049994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.053 [2024-07-25 11:59:44.051246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.053 [2024-07-25 11:59:44.051271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.053 pt3 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.053 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:58.312 malloc4 00:17:58.312 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.667 [2024-07-25 11:59:44.523509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.667 [2024-07-25 11:59:44.523549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.667 [2024-07-25 11:59:44.523565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed6370 00:17:58.667 [2024-07-25 11:59:44.523578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.667 [2024-07-25 11:59:44.524921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.667 [2024-07-25 11:59:44.524946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt4 00:17:58.667 pt4 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:58.667 [2024-07-25 11:59:44.752134] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.667 [2024-07-25 11:59:44.753296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.667 [2024-07-25 11:59:44.753345] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.667 [2024-07-25 11:59:44.753387] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.667 [2024-07-25 11:59:44.753544] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd37560 00:17:58.667 [2024-07-25 11:59:44.753554] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:58.667 [2024-07-25 11:59:44.753731] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xed7680 00:17:58.667 [2024-07-25 11:59:44.753861] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd37560 00:17:58.667 [2024-07-25 11:59:44.753871] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd37560 00:17:58.667 [2024-07-25 11:59:44.753956] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.667 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.668 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.927 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.927 "name": "raid_bdev1", 00:17:58.927 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:17:58.927 "strip_size_kb": 64, 00:17:58.927 "state": "online", 00:17:58.927 "raid_level": "raid0", 00:17:58.927 "superblock": true, 00:17:58.927 "num_base_bdevs": 4, 00:17:58.927 "num_base_bdevs_discovered": 4, 
00:17:58.927 "num_base_bdevs_operational": 4, 00:17:58.927 "base_bdevs_list": [ 00:17:58.927 { 00:17:58.927 "name": "pt1", 00:17:58.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.927 "is_configured": true, 00:17:58.927 "data_offset": 2048, 00:17:58.927 "data_size": 63488 00:17:58.927 }, 00:17:58.927 { 00:17:58.927 "name": "pt2", 00:17:58.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.927 "is_configured": true, 00:17:58.927 "data_offset": 2048, 00:17:58.927 "data_size": 63488 00:17:58.927 }, 00:17:58.928 { 00:17:58.928 "name": "pt3", 00:17:58.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.928 "is_configured": true, 00:17:58.928 "data_offset": 2048, 00:17:58.928 "data_size": 63488 00:17:58.928 }, 00:17:58.928 { 00:17:58.928 "name": "pt4", 00:17:58.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.928 "is_configured": true, 00:17:58.928 "data_offset": 2048, 00:17:58.928 "data_size": 63488 00:17:58.928 } 00:17:58.928 ] 00:17:58.928 }' 00:17:58.928 11:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.928 11:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:59.496 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:59.802 [2024-07-25 11:59:45.771071] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.802 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:59.802 "name": "raid_bdev1", 00:17:59.802 "aliases": [ 00:17:59.802 "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336" 00:17:59.802 ], 00:17:59.802 "product_name": "Raid Volume", 00:17:59.802 "block_size": 512, 00:17:59.802 "num_blocks": 253952, 00:17:59.802 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:17:59.802 "assigned_rate_limits": { 00:17:59.802 "rw_ios_per_sec": 0, 00:17:59.802 "rw_mbytes_per_sec": 0, 00:17:59.802 "r_mbytes_per_sec": 0, 00:17:59.802 "w_mbytes_per_sec": 0 00:17:59.802 }, 00:17:59.802 "claimed": false, 00:17:59.802 "zoned": false, 00:17:59.802 "supported_io_types": { 00:17:59.802 "read": true, 00:17:59.802 "write": true, 00:17:59.802 "unmap": true, 00:17:59.802 "flush": true, 00:17:59.802 "reset": true, 00:17:59.802 "nvme_admin": false, 00:17:59.802 "nvme_io": false, 00:17:59.802 "nvme_io_md": false, 00:17:59.802 "write_zeroes": true, 00:17:59.802 "zcopy": false, 00:17:59.802 "get_zone_info": false, 00:17:59.802 "zone_management": false, 00:17:59.802 "zone_append": false, 00:17:59.802 "compare": false, 00:17:59.802 "compare_and_write": false, 00:17:59.802 "abort": false, 00:17:59.802 "seek_hole": false, 00:17:59.802 "seek_data": false, 00:17:59.802 "copy": 
false, 00:17:59.802 "nvme_iov_md": false 00:17:59.802 }, 00:17:59.802 "memory_domains": [ 00:17:59.802 { 00:17:59.802 "dma_device_id": "system", 00:17:59.802 "dma_device_type": 1 00:17:59.802 }, 00:17:59.802 { 00:17:59.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.802 "dma_device_type": 2 00:17:59.802 }, 00:17:59.802 { 00:17:59.802 "dma_device_id": "system", 00:17:59.802 "dma_device_type": 1 00:17:59.802 }, 00:17:59.802 { 00:17:59.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.802 "dma_device_type": 2 00:17:59.802 }, 00:17:59.802 { 00:17:59.802 "dma_device_id": "system", 00:17:59.802 "dma_device_type": 1 00:17:59.802 }, 00:17:59.803 { 00:17:59.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.803 "dma_device_type": 2 00:17:59.803 }, 00:17:59.803 { 00:17:59.803 "dma_device_id": "system", 00:17:59.803 "dma_device_type": 1 00:17:59.803 }, 00:17:59.803 { 00:17:59.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.803 "dma_device_type": 2 00:17:59.803 } 00:17:59.803 ], 00:17:59.803 "driver_specific": { 00:17:59.803 "raid": { 00:17:59.803 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:17:59.803 "strip_size_kb": 64, 00:17:59.803 "state": "online", 00:17:59.803 "raid_level": "raid0", 00:17:59.803 "superblock": true, 00:17:59.803 "num_base_bdevs": 4, 00:17:59.803 "num_base_bdevs_discovered": 4, 00:17:59.803 "num_base_bdevs_operational": 4, 00:17:59.803 "base_bdevs_list": [ 00:17:59.803 { 00:17:59.803 "name": "pt1", 00:17:59.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.803 "is_configured": true, 00:17:59.803 "data_offset": 2048, 00:17:59.803 "data_size": 63488 00:17:59.803 }, 00:17:59.803 { 00:17:59.803 "name": "pt2", 00:17:59.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.803 "is_configured": true, 00:17:59.803 "data_offset": 2048, 00:17:59.803 "data_size": 63488 00:17:59.803 }, 00:17:59.803 { 00:17:59.803 "name": "pt3", 00:17:59.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.803 "is_configured": true, 00:17:59.803 "data_offset": 2048, 00:17:59.803 "data_size": 63488 00:17:59.803 }, 00:17:59.803 { 00:17:59.803 "name": "pt4", 00:17:59.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.803 "is_configured": true, 00:17:59.803 "data_offset": 2048, 00:17:59.803 "data_size": 63488 00:17:59.803 } 00:17:59.803 ] 00:17:59.803 } 00:17:59.803 } 00:17:59.803 }' 00:17:59.803 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.803 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:59.803 pt2 00:17:59.803 pt3 00:17:59.803 pt4' 00:17:59.803 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.803 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:59.803 11:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.082 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.082 "name": "pt1", 00:18:00.082 "aliases": [ 00:18:00.082 "00000000-0000-0000-0000-000000000001" 00:18:00.082 ], 00:18:00.082 "product_name": "passthru", 00:18:00.082 "block_size": 512, 00:18:00.082 "num_blocks": 65536, 00:18:00.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.082 "assigned_rate_limits": { 00:18:00.082 "rw_ios_per_sec": 0, 
00:18:00.082 "rw_mbytes_per_sec": 0, 00:18:00.082 "r_mbytes_per_sec": 0, 00:18:00.082 "w_mbytes_per_sec": 0 00:18:00.082 }, 00:18:00.082 "claimed": true, 00:18:00.082 "claim_type": "exclusive_write", 00:18:00.082 "zoned": false, 00:18:00.082 "supported_io_types": { 00:18:00.082 "read": true, 00:18:00.082 "write": true, 00:18:00.082 "unmap": true, 00:18:00.082 "flush": true, 00:18:00.082 "reset": true, 00:18:00.082 "nvme_admin": false, 00:18:00.082 "nvme_io": false, 00:18:00.082 "nvme_io_md": false, 00:18:00.082 "write_zeroes": true, 00:18:00.082 "zcopy": true, 00:18:00.082 "get_zone_info": false, 00:18:00.082 "zone_management": false, 00:18:00.082 "zone_append": false, 00:18:00.082 "compare": false, 00:18:00.082 "compare_and_write": false, 00:18:00.082 "abort": true, 00:18:00.082 "seek_hole": false, 00:18:00.082 "seek_data": false, 00:18:00.082 "copy": true, 00:18:00.082 "nvme_iov_md": false 00:18:00.082 }, 00:18:00.082 "memory_domains": [ 00:18:00.082 { 00:18:00.082 "dma_device_id": "system", 00:18:00.082 "dma_device_type": 1 00:18:00.082 }, 00:18:00.082 { 00:18:00.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.082 "dma_device_type": 2 00:18:00.082 } 00:18:00.082 ], 00:18:00.082 "driver_specific": { 00:18:00.082 "passthru": { 00:18:00.082 "name": "pt1", 00:18:00.082 "base_bdev_name": "malloc1" 00:18:00.082 } 00:18:00.082 } 00:18:00.082 }' 00:18:00.082 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.082 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.082 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.082 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:00.341 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.600 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.600 "name": "pt2", 00:18:00.600 "aliases": [ 00:18:00.600 "00000000-0000-0000-0000-000000000002" 00:18:00.600 ], 00:18:00.600 "product_name": "passthru", 00:18:00.600 "block_size": 512, 00:18:00.600 "num_blocks": 65536, 00:18:00.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.600 "assigned_rate_limits": { 00:18:00.600 "rw_ios_per_sec": 0, 00:18:00.600 "rw_mbytes_per_sec": 0, 00:18:00.600 "r_mbytes_per_sec": 0, 00:18:00.600 "w_mbytes_per_sec": 0 00:18:00.600 }, 
00:18:00.600 "claimed": true, 00:18:00.600 "claim_type": "exclusive_write", 00:18:00.600 "zoned": false, 00:18:00.600 "supported_io_types": { 00:18:00.600 "read": true, 00:18:00.600 "write": true, 00:18:00.600 "unmap": true, 00:18:00.600 "flush": true, 00:18:00.600 "reset": true, 00:18:00.600 "nvme_admin": false, 00:18:00.600 "nvme_io": false, 00:18:00.600 "nvme_io_md": false, 00:18:00.600 "write_zeroes": true, 00:18:00.600 "zcopy": true, 00:18:00.600 "get_zone_info": false, 00:18:00.601 "zone_management": false, 00:18:00.601 "zone_append": false, 00:18:00.601 "compare": false, 00:18:00.601 "compare_and_write": false, 00:18:00.601 "abort": true, 00:18:00.601 "seek_hole": false, 00:18:00.601 "seek_data": false, 00:18:00.601 "copy": true, 00:18:00.601 "nvme_iov_md": false 00:18:00.601 }, 00:18:00.601 "memory_domains": [ 00:18:00.601 { 00:18:00.601 "dma_device_id": "system", 00:18:00.601 "dma_device_type": 1 00:18:00.601 }, 00:18:00.601 { 00:18:00.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.601 "dma_device_type": 2 00:18:00.601 } 00:18:00.601 ], 00:18:00.601 "driver_specific": { 00:18:00.601 "passthru": { 00:18:00.601 "name": "pt2", 00:18:00.601 "base_bdev_name": "malloc2" 00:18:00.601 } 00:18:00.601 } 00:18:00.601 }' 00:18:00.601 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.601 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.860 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.119 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.119 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.119 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:01.119 11:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.119 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.119 "name": "pt3", 00:18:01.119 "aliases": [ 00:18:01.119 "00000000-0000-0000-0000-000000000003" 00:18:01.119 ], 00:18:01.119 "product_name": "passthru", 00:18:01.119 "block_size": 512, 00:18:01.119 "num_blocks": 65536, 00:18:01.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.119 "assigned_rate_limits": { 00:18:01.119 "rw_ios_per_sec": 0, 00:18:01.119 "rw_mbytes_per_sec": 0, 00:18:01.119 "r_mbytes_per_sec": 0, 00:18:01.119 "w_mbytes_per_sec": 0 00:18:01.119 }, 00:18:01.119 "claimed": true, 00:18:01.119 "claim_type": "exclusive_write", 00:18:01.119 "zoned": false, 00:18:01.119 
"supported_io_types": { 00:18:01.119 "read": true, 00:18:01.119 "write": true, 00:18:01.119 "unmap": true, 00:18:01.119 "flush": true, 00:18:01.119 "reset": true, 00:18:01.119 "nvme_admin": false, 00:18:01.119 "nvme_io": false, 00:18:01.119 "nvme_io_md": false, 00:18:01.119 "write_zeroes": true, 00:18:01.119 "zcopy": true, 00:18:01.119 "get_zone_info": false, 00:18:01.119 "zone_management": false, 00:18:01.119 "zone_append": false, 00:18:01.119 "compare": false, 00:18:01.119 "compare_and_write": false, 00:18:01.119 "abort": true, 00:18:01.119 "seek_hole": false, 00:18:01.119 "seek_data": false, 00:18:01.119 "copy": true, 00:18:01.119 "nvme_iov_md": false 00:18:01.119 }, 00:18:01.119 "memory_domains": [ 00:18:01.119 { 00:18:01.119 "dma_device_id": "system", 00:18:01.119 "dma_device_type": 1 00:18:01.119 }, 00:18:01.119 { 00:18:01.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.119 "dma_device_type": 2 00:18:01.119 } 00:18:01.119 ], 00:18:01.119 "driver_specific": { 00:18:01.119 "passthru": { 00:18:01.119 "name": "pt3", 00:18:01.119 "base_bdev_name": "malloc3" 00:18:01.119 } 00:18:01.119 } 00:18:01.119 }' 00:18:01.119 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.378 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.637 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.637 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.637 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.637 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:01.637 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.897 "name": "pt4", 00:18:01.897 "aliases": [ 00:18:01.897 "00000000-0000-0000-0000-000000000004" 00:18:01.897 ], 00:18:01.897 "product_name": "passthru", 00:18:01.897 "block_size": 512, 00:18:01.897 "num_blocks": 65536, 00:18:01.897 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.897 "assigned_rate_limits": { 00:18:01.897 "rw_ios_per_sec": 0, 00:18:01.897 "rw_mbytes_per_sec": 0, 00:18:01.897 "r_mbytes_per_sec": 0, 00:18:01.897 "w_mbytes_per_sec": 0 00:18:01.897 }, 00:18:01.897 "claimed": true, 00:18:01.897 "claim_type": "exclusive_write", 00:18:01.897 "zoned": false, 00:18:01.897 "supported_io_types": { 00:18:01.897 "read": true, 00:18:01.897 "write": true, 00:18:01.897 "unmap": true, 00:18:01.897 "flush": true, 
00:18:01.897 "reset": true, 00:18:01.897 "nvme_admin": false, 00:18:01.897 "nvme_io": false, 00:18:01.897 "nvme_io_md": false, 00:18:01.897 "write_zeroes": true, 00:18:01.897 "zcopy": true, 00:18:01.897 "get_zone_info": false, 00:18:01.897 "zone_management": false, 00:18:01.897 "zone_append": false, 00:18:01.897 "compare": false, 00:18:01.897 "compare_and_write": false, 00:18:01.897 "abort": true, 00:18:01.897 "seek_hole": false, 00:18:01.897 "seek_data": false, 00:18:01.897 "copy": true, 00:18:01.897 "nvme_iov_md": false 00:18:01.897 }, 00:18:01.897 "memory_domains": [ 00:18:01.897 { 00:18:01.897 "dma_device_id": "system", 00:18:01.897 "dma_device_type": 1 00:18:01.897 }, 00:18:01.897 { 00:18:01.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.897 "dma_device_type": 2 00:18:01.897 } 00:18:01.897 ], 00:18:01.897 "driver_specific": { 00:18:01.897 "passthru": { 00:18:01.897 "name": "pt4", 00:18:01.897 "base_bdev_name": "malloc4" 00:18:01.897 } 00:18:01.897 } 00:18:01.897 }' 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.897 11:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.897 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.157 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:02.416 [2024-07-25 11:59:48.337831] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.416 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0a7b30bf-4fa3-41d4-8863-fe2c9d19e336 00:18:02.416 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0a7b30bf-4fa3-41d4-8863-fe2c9d19e336 ']' 00:18:02.416 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:02.675 [2024-07-25 11:59:48.570157] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.675 [2024-07-25 11:59:48.570176] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.675 [2024-07-25 11:59:48.570221] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.675 [2024-07-25 11:59:48.570278] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:18:02.675 [2024-07-25 11:59:48.570288] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd37560 name raid_bdev1, state offline 00:18:02.675 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.675 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:02.934 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:02.934 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:02.934 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.934 11:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.934 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.934 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:03.193 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.193 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:03.452 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.452 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:03.711 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:03.711 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:03.970 11:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:04.229 [2024-07-25 11:59:50.174333] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:04.229 [2024-07-25 11:59:50.175591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:04.229 [2024-07-25 11:59:50.175632] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:04.229 [2024-07-25 11:59:50.175662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:04.229 [2024-07-25 11:59:50.175704] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:04.229 [2024-07-25 11:59:50.175742] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:04.229 [2024-07-25 11:59:50.175763] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:04.229 [2024-07-25 11:59:50.175784] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:04.229 [2024-07-25 11:59:50.175801] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.229 [2024-07-25 11:59:50.175810] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xee1d50 name raid_bdev1, state configuring 00:18:04.229 request: 00:18:04.229 { 00:18:04.229 "name": "raid_bdev1", 00:18:04.229 "raid_level": "raid0", 00:18:04.229 "base_bdevs": [ 00:18:04.229 "malloc1", 00:18:04.229 "malloc2", 00:18:04.229 "malloc3", 00:18:04.229 "malloc4" 00:18:04.229 ], 00:18:04.229 "strip_size_kb": 64, 00:18:04.229 "superblock": false, 00:18:04.229 "method": "bdev_raid_create", 00:18:04.229 "req_id": 1 00:18:04.229 } 00:18:04.229 Got JSON-RPC error response 00:18:04.229 response: 00:18:04.229 { 00:18:04.229 "code": -17, 00:18:04.229 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:04.229 } 00:18:04.229 11:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:04.229 11:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.229 11:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.229 11:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.229 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.229 11:59:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:04.486 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:04.486 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:04.486 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.744 [2024-07-25 11:59:50.635473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.744 [2024-07-25 11:59:50.635511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.744 [2024-07-25 11:59:50.635527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xee13f0 00:18:04.744 [2024-07-25 11:59:50.635539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.744 [2024-07-25 11:59:50.636843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.744 [2024-07-25 11:59:50.636869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.744 [2024-07-25 11:59:50.636924] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.744 [2024-07-25 11:59:50.636947] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.744 pt1 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.744 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.001 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.001 "name": "raid_bdev1", 00:18:05.001 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:18:05.001 "strip_size_kb": 64, 00:18:05.001 "state": "configuring", 00:18:05.001 "raid_level": "raid0", 00:18:05.001 "superblock": true, 00:18:05.001 "num_base_bdevs": 4, 00:18:05.001 "num_base_bdevs_discovered": 1, 00:18:05.001 "num_base_bdevs_operational": 4, 00:18:05.001 "base_bdevs_list": [ 00:18:05.001 { 00:18:05.001 "name": "pt1", 00:18:05.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.001 
"is_configured": true, 00:18:05.001 "data_offset": 2048, 00:18:05.001 "data_size": 63488 00:18:05.001 }, 00:18:05.001 { 00:18:05.001 "name": null, 00:18:05.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.001 "is_configured": false, 00:18:05.001 "data_offset": 2048, 00:18:05.001 "data_size": 63488 00:18:05.001 }, 00:18:05.001 { 00:18:05.001 "name": null, 00:18:05.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.001 "is_configured": false, 00:18:05.001 "data_offset": 2048, 00:18:05.001 "data_size": 63488 00:18:05.001 }, 00:18:05.001 { 00:18:05.001 "name": null, 00:18:05.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.001 "is_configured": false, 00:18:05.001 "data_offset": 2048, 00:18:05.001 "data_size": 63488 00:18:05.001 } 00:18:05.001 ] 00:18:05.001 }' 00:18:05.001 11:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.001 11:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.566 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:05.566 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.566 [2024-07-25 11:59:51.654309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.566 [2024-07-25 11:59:51.654350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.566 [2024-07-25 11:59:51.654367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd386e0 00:18:05.566 [2024-07-25 11:59:51.654379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.566 [2024-07-25 11:59:51.654672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.566 [2024-07-25 11:59:51.654688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.566 [2024-07-25 11:59:51.654740] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:05.566 [2024-07-25 11:59:51.654757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.566 pt2 00:18:05.566 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:05.824 [2024-07-25 11:59:51.878912] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.824 11:59:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.824 11:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.082 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.082 "name": "raid_bdev1", 00:18:06.082 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:18:06.082 "strip_size_kb": 64, 00:18:06.082 "state": "configuring", 00:18:06.082 "raid_level": "raid0", 00:18:06.082 "superblock": true, 00:18:06.082 "num_base_bdevs": 4, 00:18:06.082 "num_base_bdevs_discovered": 1, 00:18:06.082 "num_base_bdevs_operational": 4, 00:18:06.082 "base_bdevs_list": [ 00:18:06.082 { 00:18:06.082 "name": "pt1", 00:18:06.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.082 "is_configured": true, 00:18:06.082 "data_offset": 2048, 00:18:06.082 "data_size": 63488 00:18:06.082 }, 00:18:06.082 { 00:18:06.082 "name": null, 00:18:06.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.082 "is_configured": false, 00:18:06.082 "data_offset": 2048, 00:18:06.082 "data_size": 63488 00:18:06.082 }, 00:18:06.082 { 00:18:06.082 "name": null, 00:18:06.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.082 "is_configured": false, 00:18:06.082 "data_offset": 2048, 00:18:06.082 "data_size": 63488 00:18:06.082 }, 00:18:06.082 { 00:18:06.082 "name": null, 00:18:06.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.082 "is_configured": false, 00:18:06.082 "data_offset": 2048, 00:18:06.082 "data_size": 63488 00:18:06.082 } 00:18:06.082 ] 00:18:06.082 }' 00:18:06.082 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.082 11:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.647 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:06.647 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:06.647 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.905 [2024-07-25 11:59:52.881536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.905 [2024-07-25 11:59:52.881577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.905 [2024-07-25 11:59:52.881593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd38910 00:18:06.905 [2024-07-25 11:59:52.881605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.905 [2024-07-25 11:59:52.881904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.905 [2024-07-25 11:59:52.881919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.905 [2024-07-25 11:59:52.881973] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:06.905 [2024-07-25 11:59:52.881990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.905 pt2 
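The pt2 re-registration above is repeated for pt3 and pt4 with the same RPC shape. A minimal illustrative sketch of that sequence follows; only the rpc.py invocation, the socket path, and the UUID pattern mirror the log, while the loop and the rpc/sock variable names are added here for clarity and are not taken from bdev_raid.sh.

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 2 3 4; do
      # Recreate each passthru bdev on top of its malloc base with the fixed UUID
      # used throughout this run; because a raid superblock is already written to
      # the base bdev, bdev_raid re-claims the passthru bdev as soon as it registers.
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done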
00:18:06.905 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:06.905 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:06.905 11:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:07.163 [2024-07-25 11:59:53.106118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:07.163 [2024-07-25 11:59:53.106148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.163 [2024-07-25 11:59:53.106161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed6ac0 00:18:07.163 [2024-07-25 11:59:53.106171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.163 [2024-07-25 11:59:53.106414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.163 [2024-07-25 11:59:53.106429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:07.163 [2024-07-25 11:59:53.106471] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:07.163 [2024-07-25 11:59:53.106485] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.163 pt3 00:18:07.163 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:07.163 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:07.163 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:07.420 [2024-07-25 11:59:53.334723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:07.420 [2024-07-25 11:59:53.334750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.420 [2024-07-25 11:59:53.334763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd34f60 00:18:07.420 [2024-07-25 11:59:53.334774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.420 [2024-07-25 11:59:53.335010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.420 [2024-07-25 11:59:53.335025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:07.420 [2024-07-25 11:59:53.335067] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:07.420 [2024-07-25 11:59:53.335083] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:07.420 [2024-07-25 11:59:53.335194] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd38bc0 00:18:07.420 [2024-07-25 11:59:53.335203] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:07.420 [2024-07-25 11:59:53.335357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd3df70 00:18:07.420 [2024-07-25 11:59:53.335471] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd38bc0 00:18:07.420 [2024-07-25 11:59:53.335480] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd38bc0 00:18:07.420 [2024-07-25 11:59:53.335570] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.420 pt4 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.420 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.678 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.678 "name": "raid_bdev1", 00:18:07.678 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:18:07.678 "strip_size_kb": 64, 00:18:07.678 "state": "online", 00:18:07.678 "raid_level": "raid0", 00:18:07.678 "superblock": true, 00:18:07.678 "num_base_bdevs": 4, 00:18:07.678 "num_base_bdevs_discovered": 4, 00:18:07.678 "num_base_bdevs_operational": 4, 00:18:07.678 "base_bdevs_list": [ 00:18:07.678 { 00:18:07.678 "name": "pt1", 00:18:07.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.678 "is_configured": true, 00:18:07.678 "data_offset": 2048, 00:18:07.678 "data_size": 63488 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "name": "pt2", 00:18:07.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.678 "is_configured": true, 00:18:07.678 "data_offset": 2048, 00:18:07.678 "data_size": 63488 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "name": "pt3", 00:18:07.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.678 "is_configured": true, 00:18:07.678 "data_offset": 2048, 00:18:07.678 "data_size": 63488 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "name": "pt4", 00:18:07.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.678 "is_configured": true, 00:18:07.678 "data_offset": 2048, 00:18:07.678 "data_size": 63488 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }' 00:18:07.678 11:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.678 11:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 
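The verify_raid_bdev_state raid_bdev1 online raid0 64 4 call above amounts to fetching the raid bdev's JSON once and comparing a few fields. The sketch below is a condensed, illustrative version of that check, not the literal bdev_raid.sh implementation; the rpc/sock/info names are placeholders, but the RPC call, the jq filter, and the field names all appear verbatim in the log output.

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Fetch the raid bdev description once, then check the fields the test asserts on.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid0 ]]
  [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 4 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 4 ]]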
00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:08.244 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.244 [2024-07-25 11:59:54.361725] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:08.503 "name": "raid_bdev1", 00:18:08.503 "aliases": [ 00:18:08.503 "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336" 00:18:08.503 ], 00:18:08.503 "product_name": "Raid Volume", 00:18:08.503 "block_size": 512, 00:18:08.503 "num_blocks": 253952, 00:18:08.503 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:18:08.503 "assigned_rate_limits": { 00:18:08.503 "rw_ios_per_sec": 0, 00:18:08.503 "rw_mbytes_per_sec": 0, 00:18:08.503 "r_mbytes_per_sec": 0, 00:18:08.503 "w_mbytes_per_sec": 0 00:18:08.503 }, 00:18:08.503 "claimed": false, 00:18:08.503 "zoned": false, 00:18:08.503 "supported_io_types": { 00:18:08.503 "read": true, 00:18:08.503 "write": true, 00:18:08.503 "unmap": true, 00:18:08.503 "flush": true, 00:18:08.503 "reset": true, 00:18:08.503 "nvme_admin": false, 00:18:08.503 "nvme_io": false, 00:18:08.503 "nvme_io_md": false, 00:18:08.503 "write_zeroes": true, 00:18:08.503 "zcopy": false, 00:18:08.503 "get_zone_info": false, 00:18:08.503 "zone_management": false, 00:18:08.503 "zone_append": false, 00:18:08.503 "compare": false, 00:18:08.503 "compare_and_write": false, 00:18:08.503 "abort": false, 00:18:08.503 "seek_hole": false, 00:18:08.503 "seek_data": false, 00:18:08.503 "copy": false, 00:18:08.503 "nvme_iov_md": false 00:18:08.503 }, 00:18:08.503 "memory_domains": [ 00:18:08.503 { 00:18:08.503 "dma_device_id": "system", 00:18:08.503 "dma_device_type": 1 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.503 "dma_device_type": 2 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "system", 00:18:08.503 "dma_device_type": 1 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.503 "dma_device_type": 2 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "system", 00:18:08.503 "dma_device_type": 1 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.503 "dma_device_type": 2 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "system", 00:18:08.503 "dma_device_type": 1 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.503 "dma_device_type": 2 00:18:08.503 } 00:18:08.503 ], 00:18:08.503 "driver_specific": { 00:18:08.503 "raid": { 00:18:08.503 "uuid": "0a7b30bf-4fa3-41d4-8863-fe2c9d19e336", 00:18:08.503 "strip_size_kb": 64, 00:18:08.503 "state": "online", 00:18:08.503 "raid_level": "raid0", 00:18:08.503 "superblock": true, 00:18:08.503 "num_base_bdevs": 4, 00:18:08.503 "num_base_bdevs_discovered": 4, 00:18:08.503 "num_base_bdevs_operational": 4, 00:18:08.503 "base_bdevs_list": [ 00:18:08.503 { 
00:18:08.503 "name": "pt1", 00:18:08.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.503 "is_configured": true, 00:18:08.503 "data_offset": 2048, 00:18:08.503 "data_size": 63488 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "name": "pt2", 00:18:08.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.503 "is_configured": true, 00:18:08.503 "data_offset": 2048, 00:18:08.503 "data_size": 63488 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "name": "pt3", 00:18:08.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.503 "is_configured": true, 00:18:08.503 "data_offset": 2048, 00:18:08.503 "data_size": 63488 00:18:08.503 }, 00:18:08.503 { 00:18:08.503 "name": "pt4", 00:18:08.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.503 "is_configured": true, 00:18:08.503 "data_offset": 2048, 00:18:08.503 "data_size": 63488 00:18:08.503 } 00:18:08.503 ] 00:18:08.503 } 00:18:08.503 } 00:18:08.503 }' 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:08.503 pt2 00:18:08.503 pt3 00:18:08.503 pt4' 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:08.503 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:08.761 "name": "pt1", 00:18:08.761 "aliases": [ 00:18:08.761 "00000000-0000-0000-0000-000000000001" 00:18:08.761 ], 00:18:08.761 "product_name": "passthru", 00:18:08.761 "block_size": 512, 00:18:08.761 "num_blocks": 65536, 00:18:08.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.761 "assigned_rate_limits": { 00:18:08.761 "rw_ios_per_sec": 0, 00:18:08.761 "rw_mbytes_per_sec": 0, 00:18:08.761 "r_mbytes_per_sec": 0, 00:18:08.761 "w_mbytes_per_sec": 0 00:18:08.761 }, 00:18:08.761 "claimed": true, 00:18:08.761 "claim_type": "exclusive_write", 00:18:08.761 "zoned": false, 00:18:08.761 "supported_io_types": { 00:18:08.761 "read": true, 00:18:08.761 "write": true, 00:18:08.761 "unmap": true, 00:18:08.761 "flush": true, 00:18:08.761 "reset": true, 00:18:08.761 "nvme_admin": false, 00:18:08.761 "nvme_io": false, 00:18:08.761 "nvme_io_md": false, 00:18:08.761 "write_zeroes": true, 00:18:08.761 "zcopy": true, 00:18:08.761 "get_zone_info": false, 00:18:08.761 "zone_management": false, 00:18:08.761 "zone_append": false, 00:18:08.761 "compare": false, 00:18:08.761 "compare_and_write": false, 00:18:08.761 "abort": true, 00:18:08.761 "seek_hole": false, 00:18:08.761 "seek_data": false, 00:18:08.761 "copy": true, 00:18:08.761 "nvme_iov_md": false 00:18:08.761 }, 00:18:08.761 "memory_domains": [ 00:18:08.761 { 00:18:08.761 "dma_device_id": "system", 00:18:08.761 "dma_device_type": 1 00:18:08.761 }, 00:18:08.761 { 00:18:08.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.761 "dma_device_type": 2 00:18:08.761 } 00:18:08.761 ], 00:18:08.761 "driver_specific": { 00:18:08.761 "passthru": { 00:18:08.761 "name": "pt1", 00:18:08.761 "base_bdev_name": "malloc1" 00:18:08.761 } 00:18:08.761 } 00:18:08.761 }' 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:08.761 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.019 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:09.019 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.019 11:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.019 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:09.019 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.019 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:09.019 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:09.277 "name": "pt2", 00:18:09.277 "aliases": [ 00:18:09.277 "00000000-0000-0000-0000-000000000002" 00:18:09.277 ], 00:18:09.277 "product_name": "passthru", 00:18:09.277 "block_size": 512, 00:18:09.277 "num_blocks": 65536, 00:18:09.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.277 "assigned_rate_limits": { 00:18:09.277 "rw_ios_per_sec": 0, 00:18:09.277 "rw_mbytes_per_sec": 0, 00:18:09.277 "r_mbytes_per_sec": 0, 00:18:09.277 "w_mbytes_per_sec": 0 00:18:09.277 }, 00:18:09.277 "claimed": true, 00:18:09.277 "claim_type": "exclusive_write", 00:18:09.277 "zoned": false, 00:18:09.277 "supported_io_types": { 00:18:09.277 "read": true, 00:18:09.277 "write": true, 00:18:09.277 "unmap": true, 00:18:09.277 "flush": true, 00:18:09.277 "reset": true, 00:18:09.277 "nvme_admin": false, 00:18:09.277 "nvme_io": false, 00:18:09.277 "nvme_io_md": false, 00:18:09.277 "write_zeroes": true, 00:18:09.277 "zcopy": true, 00:18:09.277 "get_zone_info": false, 00:18:09.277 "zone_management": false, 00:18:09.277 "zone_append": false, 00:18:09.277 "compare": false, 00:18:09.277 "compare_and_write": false, 00:18:09.277 "abort": true, 00:18:09.277 "seek_hole": false, 00:18:09.277 "seek_data": false, 00:18:09.277 "copy": true, 00:18:09.277 "nvme_iov_md": false 00:18:09.277 }, 00:18:09.277 "memory_domains": [ 00:18:09.277 { 00:18:09.277 "dma_device_id": "system", 00:18:09.277 "dma_device_type": 1 00:18:09.277 }, 00:18:09.277 { 00:18:09.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.277 "dma_device_type": 2 00:18:09.277 } 00:18:09.277 ], 00:18:09.277 "driver_specific": { 00:18:09.277 "passthru": { 00:18:09.277 "name": "pt2", 00:18:09.277 "base_bdev_name": "malloc2" 00:18:09.277 } 00:18:09.277 } 00:18:09.277 }' 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.277 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:09.535 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:09.793 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:09.793 "name": "pt3", 00:18:09.793 "aliases": [ 00:18:09.793 "00000000-0000-0000-0000-000000000003" 00:18:09.793 ], 00:18:09.793 "product_name": "passthru", 00:18:09.793 "block_size": 512, 00:18:09.793 "num_blocks": 65536, 00:18:09.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.793 "assigned_rate_limits": { 00:18:09.793 "rw_ios_per_sec": 0, 00:18:09.793 "rw_mbytes_per_sec": 0, 00:18:09.793 "r_mbytes_per_sec": 0, 00:18:09.793 "w_mbytes_per_sec": 0 00:18:09.793 }, 00:18:09.793 "claimed": true, 00:18:09.793 "claim_type": "exclusive_write", 00:18:09.793 "zoned": false, 00:18:09.793 "supported_io_types": { 00:18:09.793 "read": true, 00:18:09.793 "write": true, 00:18:09.793 "unmap": true, 00:18:09.793 "flush": true, 00:18:09.793 "reset": true, 00:18:09.793 "nvme_admin": false, 00:18:09.793 "nvme_io": false, 00:18:09.793 "nvme_io_md": false, 00:18:09.793 "write_zeroes": true, 00:18:09.793 "zcopy": true, 00:18:09.793 "get_zone_info": false, 00:18:09.793 "zone_management": false, 00:18:09.793 "zone_append": false, 00:18:09.793 "compare": false, 00:18:09.793 "compare_and_write": false, 00:18:09.793 "abort": true, 00:18:09.793 "seek_hole": false, 00:18:09.793 "seek_data": false, 00:18:09.793 "copy": true, 00:18:09.793 "nvme_iov_md": false 00:18:09.793 }, 00:18:09.793 "memory_domains": [ 00:18:09.793 { 00:18:09.793 "dma_device_id": "system", 00:18:09.793 "dma_device_type": 1 00:18:09.793 }, 00:18:09.793 { 00:18:09.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.793 "dma_device_type": 2 00:18:09.793 } 00:18:09.793 ], 00:18:09.793 "driver_specific": { 00:18:09.793 "passthru": { 00:18:09.793 "name": "pt3", 00:18:09.793 "base_bdev_name": "malloc3" 00:18:09.793 } 00:18:09.793 } 00:18:09.793 }' 00:18:09.793 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.793 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.793 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:09.793 11:59:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.050 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.050 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.050 11:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:10.050 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.308 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.308 "name": "pt4", 00:18:10.308 "aliases": [ 00:18:10.308 "00000000-0000-0000-0000-000000000004" 00:18:10.308 ], 00:18:10.309 "product_name": "passthru", 00:18:10.309 "block_size": 512, 00:18:10.309 "num_blocks": 65536, 00:18:10.309 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.309 "assigned_rate_limits": { 00:18:10.309 "rw_ios_per_sec": 0, 00:18:10.309 "rw_mbytes_per_sec": 0, 00:18:10.309 "r_mbytes_per_sec": 0, 00:18:10.309 "w_mbytes_per_sec": 0 00:18:10.309 }, 00:18:10.309 "claimed": true, 00:18:10.309 "claim_type": "exclusive_write", 00:18:10.309 "zoned": false, 00:18:10.309 "supported_io_types": { 00:18:10.309 "read": true, 00:18:10.309 "write": true, 00:18:10.309 "unmap": true, 00:18:10.309 "flush": true, 00:18:10.309 "reset": true, 00:18:10.309 "nvme_admin": false, 00:18:10.309 "nvme_io": false, 00:18:10.309 "nvme_io_md": false, 00:18:10.309 "write_zeroes": true, 00:18:10.309 "zcopy": true, 00:18:10.309 "get_zone_info": false, 00:18:10.309 "zone_management": false, 00:18:10.309 "zone_append": false, 00:18:10.309 "compare": false, 00:18:10.309 "compare_and_write": false, 00:18:10.309 "abort": true, 00:18:10.309 "seek_hole": false, 00:18:10.309 "seek_data": false, 00:18:10.309 "copy": true, 00:18:10.309 "nvme_iov_md": false 00:18:10.309 }, 00:18:10.309 "memory_domains": [ 00:18:10.309 { 00:18:10.309 "dma_device_id": "system", 00:18:10.309 "dma_device_type": 1 00:18:10.309 }, 00:18:10.309 { 00:18:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.309 "dma_device_type": 2 00:18:10.309 } 00:18:10.309 ], 00:18:10.309 "driver_specific": { 00:18:10.309 "passthru": { 00:18:10.309 "name": "pt4", 00:18:10.309 "base_bdev_name": "malloc4" 00:18:10.309 } 00:18:10.309 } 00:18:10.309 }' 00:18:10.309 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.567 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.824 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.824 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:10.824 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.824 [2024-07-25 11:59:56.936833] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0a7b30bf-4fa3-41d4-8863-fe2c9d19e336 '!=' 0a7b30bf-4fa3-41d4-8863-fe2c9d19e336 ']' 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 4177194 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 4177194 ']' 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 4177194 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.082 11:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4177194 00:18:11.082 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.082 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.082 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4177194' 00:18:11.082 killing process with pid 4177194 00:18:11.082 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 4177194 00:18:11.082 [2024-07-25 11:59:57.017654] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.082 [2024-07-25 11:59:57.017707] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.082 [2024-07-25 11:59:57.017765] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.082 [2024-07-25 11:59:57.017775] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd38bc0 name raid_bdev1, state offline 00:18:11.082 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 4177194 00:18:11.082 [2024-07-25 11:59:57.050965] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.341 11:59:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:11.341 00:18:11.341 real 0m15.497s 00:18:11.341 user 0m27.939s 00:18:11.341 sys 0m2.828s 00:18:11.341 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.341 11:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.341 ************************************ 00:18:11.341 END TEST raid_superblock_test 00:18:11.341 ************************************ 00:18:11.341 11:59:57 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:18:11.341 11:59:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:11.341 11:59:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.341 11:59:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.341 ************************************ 00:18:11.341 START TEST raid_read_error_test 00:18:11.341 ************************************ 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:11.341 
11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BLEHAyKXYM 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4180156 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4180156 /var/tmp/spdk-raid.sock 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4180156 ']' 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:11.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.341 11:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.341 [2024-07-25 11:59:57.395079] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
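The EAL parameter dump that follows comes from the bdevperf process started for this error test. A minimal sketch of that launch is shown below; the command line and paths are copied from the log entries above, while the polling loop is only an illustrative stand-in for the test's waitforlisten helper, not its actual implementation.

  bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/spdk-raid.sock
  # Launch bdevperf with the options recorded above and keep it in the background.
  "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  # Illustrative stand-in for waitforlisten: poll the RPC socket until it answers.
  until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done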
00:18:11.341 [2024-07-25 11:59:57.395149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180156 ] 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:11.600 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:11.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.600 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:11.600 [2024-07-25 11:59:57.525314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.600 [2024-07-25 11:59:57.611400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.600 [2024-07-25 11:59:57.666515] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.600 [2024-07-25 11:59:57.666543] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.531 11:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.531 11:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:12.532 11:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:12.532 11:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:12.532 BaseBdev1_malloc 00:18:12.532 11:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:12.821 true 00:18:12.821 11:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:13.387 [2024-07-25 11:59:59.211458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:13.387 [2024-07-25 11:59:59.211499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.387 [2024-07-25 11:59:59.211516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb59190 00:18:13.387 [2024-07-25 11:59:59.211528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.387 [2024-07-25 11:59:59.213129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.387 [2024-07-25 11:59:59.213163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:13.387 BaseBdev1 00:18:13.387 11:59:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:13.387 11:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:13.387 BaseBdev2_malloc 00:18:13.387 11:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:13.645 true 00:18:13.645 11:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:13.903 [2024-07-25 11:59:59.909658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:13.903 [2024-07-25 11:59:59.909697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.903 [2024-07-25 11:59:59.909714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5de20 00:18:13.903 [2024-07-25 11:59:59.909726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.904 [2024-07-25 11:59:59.911101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.904 [2024-07-25 11:59:59.911128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:13.904 BaseBdev2 00:18:13.904 11:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:13.904 11:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:14.160 BaseBdev3_malloc 00:18:14.160 12:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:14.418 true 00:18:14.418 12:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:14.676 [2024-07-25 12:00:00.587693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:14.676 [2024-07-25 12:00:00.587737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.676 [2024-07-25 12:00:00.587761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5ed90 00:18:14.676 [2024-07-25 12:00:00.587772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.676 [2024-07-25 12:00:00.589233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.676 [2024-07-25 12:00:00.589259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:14.676 BaseBdev3 00:18:14.676 12:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:14.676 12:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:14.935 BaseBdev4_malloc 00:18:14.935 12:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:14.935 true 00:18:15.192 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:15.192 [2024-07-25 12:00:01.269739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:15.192 [2024-07-25 12:00:01.269780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.192 [2024-07-25 12:00:01.269798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb61000 00:18:15.192 [2024-07-25 12:00:01.269810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.192 [2024-07-25 12:00:01.271211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.192 [2024-07-25 12:00:01.271237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:15.192 BaseBdev4 00:18:15.192 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:15.449 [2024-07-25 12:00:01.498386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.449 [2024-07-25 12:00:01.499559] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.449 [2024-07-25 12:00:01.499622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.449 [2024-07-25 12:00:01.499675] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:15.449 [2024-07-25 12:00:01.499891] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xb61dd0 00:18:15.449 [2024-07-25 12:00:01.499903] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:15.449 [2024-07-25 12:00:01.500085] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb63080 00:18:15.449 [2024-07-25 12:00:01.500236] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb61dd0 00:18:15.449 [2024-07-25 12:00:01.500246] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb61dd0 00:18:15.449 [2024-07-25 12:00:01.500344] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.449 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.705 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.705 "name": "raid_bdev1", 00:18:15.705 "uuid": "d0bdd147-e96e-469e-ace9-62e0eab41d64", 00:18:15.705 "strip_size_kb": 64, 00:18:15.705 "state": "online", 00:18:15.705 "raid_level": "raid0", 00:18:15.705 "superblock": true, 00:18:15.705 "num_base_bdevs": 4, 00:18:15.705 "num_base_bdevs_discovered": 4, 00:18:15.705 "num_base_bdevs_operational": 4, 00:18:15.705 "base_bdevs_list": [ 00:18:15.705 { 00:18:15.705 "name": "BaseBdev1", 00:18:15.705 "uuid": "c13301a6-6e43-5a93-aca3-48d6b8d5b807", 00:18:15.705 "is_configured": true, 00:18:15.705 "data_offset": 2048, 00:18:15.705 "data_size": 63488 00:18:15.705 }, 00:18:15.705 { 00:18:15.705 "name": "BaseBdev2", 00:18:15.705 "uuid": "63c889f3-3d96-5d6d-b80f-6aaf0dd7f0e8", 00:18:15.705 "is_configured": true, 00:18:15.705 "data_offset": 2048, 00:18:15.705 "data_size": 63488 00:18:15.705 }, 00:18:15.705 { 00:18:15.705 "name": "BaseBdev3", 00:18:15.705 "uuid": "9ca12dc2-e590-5c32-8840-a179c223758d", 00:18:15.705 "is_configured": true, 00:18:15.705 "data_offset": 2048, 00:18:15.705 "data_size": 63488 00:18:15.705 }, 00:18:15.705 { 00:18:15.705 "name": "BaseBdev4", 00:18:15.705 "uuid": "9cc56d5e-ffba-5f94-a920-1586ec36d56f", 00:18:15.705 "is_configured": true, 00:18:15.705 "data_offset": 2048, 00:18:15.705 "data_size": 63488 00:18:15.705 } 00:18:15.705 ] 00:18:15.705 }' 00:18:15.705 12:00:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.705 12:00:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.269 12:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:16.269 12:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:16.269 [2024-07-25 12:00:02.272624] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b4ef0 00:18:17.202 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.460 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.718 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.718 "name": "raid_bdev1", 00:18:17.718 "uuid": "d0bdd147-e96e-469e-ace9-62e0eab41d64", 00:18:17.718 "strip_size_kb": 64, 00:18:17.718 "state": "online", 00:18:17.718 "raid_level": "raid0", 00:18:17.718 "superblock": true, 00:18:17.718 "num_base_bdevs": 4, 00:18:17.718 "num_base_bdevs_discovered": 4, 00:18:17.718 "num_base_bdevs_operational": 4, 00:18:17.718 "base_bdevs_list": [ 00:18:17.718 { 00:18:17.718 "name": "BaseBdev1", 00:18:17.718 "uuid": "c13301a6-6e43-5a93-aca3-48d6b8d5b807", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev2", 00:18:17.718 "uuid": "63c889f3-3d96-5d6d-b80f-6aaf0dd7f0e8", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev3", 00:18:17.718 "uuid": "9ca12dc2-e590-5c32-8840-a179c223758d", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev4", 00:18:17.718 "uuid": "9cc56d5e-ffba-5f94-a920-1586ec36d56f", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 } 00:18:17.718 ] 00:18:17.718 }' 00:18:17.718 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.718 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:18.651 [2024-07-25 12:00:04.718989] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.651 [2024-07-25 12:00:04.719020] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.651 [2024-07-25 12:00:04.721936] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.651 [2024-07-25 12:00:04.721972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.651 [2024-07-25 12:00:04.722007] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.651 [2024-07-25 12:00:04.722017] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0xb61dd0 name raid_bdev1, state offline 00:18:18.651 0 00:18:18.651 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4180156 00:18:18.652 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4180156 ']' 00:18:18.652 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4180156 00:18:18.652 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:18.652 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.652 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4180156 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4180156' 00:18:18.910 killing process with pid 4180156 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4180156 00:18:18.910 [2024-07-25 12:00:04.775979] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4180156 00:18:18.910 [2024-07-25 12:00:04.803429] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BLEHAyKXYM 00:18:18.910 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:18:18.910 00:18:18.910 real 0m7.688s 00:18:18.910 user 0m12.333s 00:18:18.910 sys 0m1.284s 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.910 12:00:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.910 ************************************ 00:18:18.910 END TEST raid_read_error_test 00:18:18.910 ************************************ 00:18:19.168 12:00:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:18:19.168 12:00:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:19.168 12:00:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.168 12:00:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.168 ************************************ 00:18:19.168 START TEST raid_write_error_test 00:18:19.168 ************************************ 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local 
raid_level=raid0 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.R5lR3Ffrqs 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4181988 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4181988 /var/tmp/spdk-raid.sock 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 4181988 ']' 
00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.168 12:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.168 [2024-07-25 12:00:05.171854] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:18:19.168 [2024-07-25 12:00:05.171916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181988 ] 00:18:19.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.168 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:19.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.168 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:19.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.168 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.0 cannot 
be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:19.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.169 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:19.427 [2024-07-25 12:00:05.303155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.427 [2024-07-25 12:00:05.386567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.427 [2024-07-25 12:00:05.451835] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.427 [2024-07-25 12:00:05.451872] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.992 12:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.992 12:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:19.992 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:19.992 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:20.249 BaseBdev1_malloc 00:18:20.249 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:20.506 true 00:18:20.507 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:20.764 [2024-07-25 12:00:06.734313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:20.764 [2024-07-25 12:00:06.734356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.764 [2024-07-25 12:00:06.734372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c2190 00:18:20.764 [2024-07-25 12:00:06.734384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.764 [2024-07-25 12:00:06.735854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.764 [2024-07-25 12:00:06.735881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.764 BaseBdev1 00:18:20.764 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:20.764 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:21.022 BaseBdev2_malloc 00:18:21.022 12:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:21.280 true 00:18:21.280 12:00:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:21.539 [2024-07-25 12:00:07.424222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:21.539 [2024-07-25 12:00:07.424261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.539 [2024-07-25 12:00:07.424278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c6e20 00:18:21.539 [2024-07-25 12:00:07.424294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.539 [2024-07-25 12:00:07.425569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.539 [2024-07-25 12:00:07.425595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:21.539 BaseBdev2 00:18:21.539 12:00:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:21.539 12:00:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:21.539 BaseBdev3_malloc 00:18:21.796 12:00:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:21.796 true 00:18:21.796 12:00:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:22.053 [2024-07-25 12:00:08.106188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:22.053 [2024-07-25 12:00:08.106224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.053 [2024-07-25 12:00:08.106242] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c7d90 00:18:22.053 [2024-07-25 12:00:08.106254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.053 [2024-07-25 12:00:08.107524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.053 [2024-07-25 12:00:08.107550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:22.053 BaseBdev3 00:18:22.053 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:22.053 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:22.311 BaseBdev4_malloc 00:18:22.311 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:22.568 true 00:18:22.568 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:22.826 [2024-07-25 12:00:08.784426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:22.826 [2024-07-25 12:00:08.784466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.826 [2024-07-25 12:00:08.784483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ca000 00:18:22.826 [2024-07-25 12:00:08.784495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.826 [2024-07-25 12:00:08.785747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.826 [2024-07-25 12:00:08.785772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:22.826 BaseBdev4 00:18:22.826 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:23.084 [2024-07-25 12:00:09.013056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.084 [2024-07-25 12:00:09.014111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.084 [2024-07-25 12:00:09.014179] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.084 [2024-07-25 12:00:09.014232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.084 [2024-07-25 12:00:09.014440] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x12cadd0 00:18:23.084 [2024-07-25 12:00:09.014451] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:23.084 [2024-07-25 12:00:09.014612] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12cc080 00:18:23.084 [2024-07-25 12:00:09.014742] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12cadd0 00:18:23.084 [2024-07-25 12:00:09.014751] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12cadd0 00:18:23.084 [2024-07-25 12:00:09.014839] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.084 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.342 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.342 "name": "raid_bdev1", 00:18:23.342 "uuid": "e4c339a1-39a1-422b-86e2-cb3d3f288335", 00:18:23.342 "strip_size_kb": 64, 00:18:23.342 "state": "online", 00:18:23.342 "raid_level": "raid0", 00:18:23.342 "superblock": true, 00:18:23.342 "num_base_bdevs": 4, 00:18:23.342 "num_base_bdevs_discovered": 4, 00:18:23.342 "num_base_bdevs_operational": 4, 00:18:23.342 "base_bdevs_list": [ 00:18:23.342 { 00:18:23.342 "name": "BaseBdev1", 00:18:23.342 "uuid": "1eb3061e-bd89-5944-bed1-7129d281571a", 00:18:23.342 "is_configured": true, 00:18:23.342 "data_offset": 2048, 00:18:23.342 "data_size": 63488 00:18:23.342 }, 00:18:23.342 { 00:18:23.342 "name": "BaseBdev2", 00:18:23.342 "uuid": "0ba00815-8d88-52fe-9b88-b97c9f6b1f84", 00:18:23.342 "is_configured": true, 00:18:23.342 "data_offset": 2048, 00:18:23.342 "data_size": 63488 00:18:23.342 }, 00:18:23.342 { 00:18:23.342 "name": "BaseBdev3", 00:18:23.342 "uuid": "e034909c-a3ee-5ec8-b947-249ef6843a28", 00:18:23.342 "is_configured": true, 00:18:23.342 "data_offset": 2048, 00:18:23.342 "data_size": 63488 00:18:23.342 }, 00:18:23.342 { 00:18:23.342 "name": "BaseBdev4", 00:18:23.342 "uuid": "e2f1e843-85d5-5d3d-97ff-bf28c7aff232", 00:18:23.342 "is_configured": true, 00:18:23.342 "data_offset": 2048, 00:18:23.342 "data_size": 63488 00:18:23.342 } 00:18:23.342 ] 00:18:23.342 }' 00:18:23.342 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.343 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.908 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:23.908 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:23.908 [2024-07-25 12:00:09.931729] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x111def0 00:18:24.844 12:00:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.102 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.360 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.360 "name": "raid_bdev1", 00:18:25.360 "uuid": "e4c339a1-39a1-422b-86e2-cb3d3f288335", 00:18:25.360 "strip_size_kb": 64, 00:18:25.360 "state": "online", 00:18:25.360 "raid_level": "raid0", 00:18:25.360 "superblock": true, 00:18:25.360 "num_base_bdevs": 4, 00:18:25.360 "num_base_bdevs_discovered": 4, 00:18:25.360 "num_base_bdevs_operational": 4, 00:18:25.360 "base_bdevs_list": [ 00:18:25.360 { 00:18:25.360 "name": "BaseBdev1", 00:18:25.360 "uuid": "1eb3061e-bd89-5944-bed1-7129d281571a", 00:18:25.360 "is_configured": true, 00:18:25.360 "data_offset": 2048, 00:18:25.360 "data_size": 63488 00:18:25.360 }, 00:18:25.360 { 00:18:25.360 "name": "BaseBdev2", 00:18:25.360 "uuid": "0ba00815-8d88-52fe-9b88-b97c9f6b1f84", 00:18:25.360 "is_configured": true, 00:18:25.360 "data_offset": 2048, 00:18:25.360 "data_size": 63488 00:18:25.360 }, 00:18:25.360 { 00:18:25.360 "name": "BaseBdev3", 00:18:25.360 "uuid": "e034909c-a3ee-5ec8-b947-249ef6843a28", 00:18:25.360 "is_configured": true, 00:18:25.360 "data_offset": 2048, 00:18:25.361 "data_size": 63488 00:18:25.361 }, 00:18:25.361 { 00:18:25.361 "name": "BaseBdev4", 00:18:25.361 "uuid": "e2f1e843-85d5-5d3d-97ff-bf28c7aff232", 00:18:25.361 "is_configured": true, 00:18:25.361 "data_offset": 2048, 00:18:25.361 "data_size": 63488 00:18:25.361 } 00:18:25.361 ] 00:18:25.361 }' 00:18:25.361 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.361 12:00:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:26.301 [2024-07-25 12:00:12.366081] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.301 [2024-07-25 12:00:12.366109] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.301 [2024-07-25 12:00:12.369024] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.301 [2024-07-25 12:00:12.369060] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.301 [2024-07-25 12:00:12.369097] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.301 [2024-07-25 12:00:12.369107] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12cadd0 name raid_bdev1, state offline 00:18:26.301 0 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4181988 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 4181988 ']' 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 4181988 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.301 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4181988 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4181988' 00:18:26.559 killing process with pid 4181988 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 4181988 00:18:26.559 [2024-07-25 12:00:12.442801] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 4181988 00:18:26.559 [2024-07-25 12:00:12.469724] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.R5lR3Ffrqs 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:18:26.559 00:18:26.559 real 0m7.580s 00:18:26.559 user 0m12.106s 00:18:26.559 sys 0m1.337s 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.559 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.559 ************************************ 00:18:26.559 END TEST raid_write_error_test 00:18:26.559 ************************************ 00:18:26.817 12:00:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:26.817 12:00:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:26.817 12:00:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:26.817 12:00:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.817 12:00:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.817 ************************************ 00:18:26.817 START TEST raid_state_function_test 00:18:26.817 ************************************ 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:26.817 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 
00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=4183481 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4183481' 00:18:26.818 Process raid pid: 4183481 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 4183481 /var/tmp/spdk-raid.sock 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 4183481 ']' 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.818 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.818 [2024-07-25 12:00:12.829274] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
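A condensed, illustrative rendering of the flow the two error tests above drive through rpc.py and bdevperf. This is an editorial sketch, not part of the captured console output: the workspace prefix /var/jenkins/workspace/crypto-phy-autotest/spdk is shortened to the repo root, and the backgrounding and log redirection are paraphrased rather than copied verbatim from bdev_raid.sh.

rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
bdevperf_log=$(mktemp -p /raidtest)
build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &    # -z: start idle and wait for RPCs
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"     # 32 MB backing bdev, 512 B blocks
    $rpc bdev_error_create "${bdev}_malloc"                # error-injection wrapper, exposed as EE_${bdev}_malloc
    $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
sleep 1
$rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure   # the read test injects 'read failure' instead
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # expect "state": "online" with 4 base bdevs
$rpc bdev_raid_delete raid_bdev1
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != "0.00" ]]    # raid0 has no redundancy, so the injected failures must surface in bdevperf's error column

The only functional difference between raid_read_error_test and raid_write_error_test in the trace above is the read/write argument handed to bdev_error_inject_error; the bdev assembly, the online/4-base-bdev state check, and the non-zero failure-rate comparison are shared.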
00:18:26.818 [2024-07-25 12:00:12.829333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:26.818 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:26.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.818 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:26.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.819 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:26.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.819 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:27.075 [2024-07-25 12:00:12.962833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.075 [2024-07-25 12:00:13.045334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.075 [2024-07-25 12:00:13.108099] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.075 [2024-07-25 12:00:13.108134] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.638 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.638 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:27.638 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:27.895 [2024-07-25 12:00:13.937871] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.895 [2024-07-25 12:00:13.937909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.895 [2024-07-25 12:00:13.937919] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.895 [2024-07-25 12:00:13.937930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.895 [2024-07-25 12:00:13.937938] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.895 [2024-07-25 12:00:13.937947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.895 [2024-07-25 12:00:13.937956] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.895 [2024-07-25 12:00:13.937965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.895 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.152 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.152 "name": "Existed_Raid", 00:18:28.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.152 "strip_size_kb": 64, 00:18:28.152 "state": "configuring", 00:18:28.152 "raid_level": "concat", 00:18:28.152 "superblock": false, 00:18:28.152 "num_base_bdevs": 4, 00:18:28.152 "num_base_bdevs_discovered": 0, 00:18:28.152 "num_base_bdevs_operational": 4, 00:18:28.152 "base_bdevs_list": [ 00:18:28.152 { 00:18:28.152 "name": "BaseBdev1", 00:18:28.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.152 "is_configured": false, 00:18:28.152 "data_offset": 0, 00:18:28.152 "data_size": 0 00:18:28.152 }, 00:18:28.152 { 00:18:28.152 "name": "BaseBdev2", 00:18:28.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.152 "is_configured": false, 00:18:28.152 "data_offset": 0, 00:18:28.152 "data_size": 0 00:18:28.152 }, 00:18:28.152 { 00:18:28.152 "name": "BaseBdev3", 00:18:28.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.152 "is_configured": false, 00:18:28.152 "data_offset": 0, 00:18:28.152 "data_size": 0 00:18:28.152 }, 00:18:28.152 { 00:18:28.152 "name": "BaseBdev4", 00:18:28.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.152 "is_configured": false, 00:18:28.152 "data_offset": 0, 00:18:28.152 "data_size": 0 00:18:28.152 } 00:18:28.152 ] 00:18:28.152 }' 00:18:28.152 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.152 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.714 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.970 [2024-07-25 12:00:14.932361] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.970 [2024-07-25 12:00:14.932391] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x175bf60 name Existed_Raid, state configuring 00:18:28.970 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:29.227 
[2024-07-25 12:00:15.156963] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.227 [2024-07-25 12:00:15.156989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:29.227 [2024-07-25 12:00:15.156997] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.227 [2024-07-25 12:00:15.157008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.227 [2024-07-25 12:00:15.157015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:29.227 [2024-07-25 12:00:15.157026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:29.227 [2024-07-25 12:00:15.157033] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:29.227 [2024-07-25 12:00:15.157047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:29.227 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:29.484 [2024-07-25 12:00:15.391049] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.484 BaseBdev1 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.484 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.740 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.740 [ 00:18:29.740 { 00:18:29.740 "name": "BaseBdev1", 00:18:29.740 "aliases": [ 00:18:29.740 "35a7be38-862b-4743-946c-e8291ffb891f" 00:18:29.740 ], 00:18:29.740 "product_name": "Malloc disk", 00:18:29.740 "block_size": 512, 00:18:29.740 "num_blocks": 65536, 00:18:29.740 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:29.740 "assigned_rate_limits": { 00:18:29.740 "rw_ios_per_sec": 0, 00:18:29.740 "rw_mbytes_per_sec": 0, 00:18:29.740 "r_mbytes_per_sec": 0, 00:18:29.740 "w_mbytes_per_sec": 0 00:18:29.740 }, 00:18:29.740 "claimed": true, 00:18:29.740 "claim_type": "exclusive_write", 00:18:29.740 "zoned": false, 00:18:29.740 "supported_io_types": { 00:18:29.740 "read": true, 00:18:29.740 "write": true, 00:18:29.740 "unmap": true, 00:18:29.740 "flush": true, 00:18:29.740 "reset": true, 00:18:29.740 "nvme_admin": false, 00:18:29.740 "nvme_io": false, 00:18:29.740 "nvme_io_md": false, 00:18:29.740 "write_zeroes": true, 00:18:29.740 "zcopy": true, 00:18:29.740 "get_zone_info": false, 00:18:29.740 "zone_management": false, 00:18:29.740 
"zone_append": false, 00:18:29.740 "compare": false, 00:18:29.740 "compare_and_write": false, 00:18:29.740 "abort": true, 00:18:29.740 "seek_hole": false, 00:18:29.740 "seek_data": false, 00:18:29.740 "copy": true, 00:18:29.740 "nvme_iov_md": false 00:18:29.740 }, 00:18:29.740 "memory_domains": [ 00:18:29.740 { 00:18:29.740 "dma_device_id": "system", 00:18:29.740 "dma_device_type": 1 00:18:29.740 }, 00:18:29.740 { 00:18:29.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.740 "dma_device_type": 2 00:18:29.740 } 00:18:29.740 ], 00:18:29.741 "driver_specific": {} 00:18:29.741 } 00:18:29.741 ] 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.997 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.997 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.997 "name": "Existed_Raid", 00:18:29.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.997 "strip_size_kb": 64, 00:18:29.997 "state": "configuring", 00:18:29.997 "raid_level": "concat", 00:18:29.997 "superblock": false, 00:18:29.997 "num_base_bdevs": 4, 00:18:29.997 "num_base_bdevs_discovered": 1, 00:18:29.997 "num_base_bdevs_operational": 4, 00:18:29.997 "base_bdevs_list": [ 00:18:29.997 { 00:18:29.997 "name": "BaseBdev1", 00:18:29.997 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:29.997 "is_configured": true, 00:18:29.997 "data_offset": 0, 00:18:29.997 "data_size": 65536 00:18:29.997 }, 00:18:29.997 { 00:18:29.997 "name": "BaseBdev2", 00:18:29.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.997 "is_configured": false, 00:18:29.997 "data_offset": 0, 00:18:29.997 "data_size": 0 00:18:29.997 }, 00:18:29.997 { 00:18:29.997 "name": "BaseBdev3", 00:18:29.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.997 "is_configured": false, 00:18:29.997 "data_offset": 0, 00:18:29.997 "data_size": 0 00:18:29.997 }, 00:18:29.997 { 00:18:29.997 "name": "BaseBdev4", 00:18:29.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.997 "is_configured": false, 00:18:29.997 "data_offset": 0, 
00:18:29.997 "data_size": 0 00:18:29.997 } 00:18:29.997 ] 00:18:29.997 }' 00:18:29.997 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.997 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.562 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.818 [2024-07-25 12:00:16.810771] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.818 [2024-07-25 12:00:16.810807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x175b7d0 name Existed_Raid, state configuring 00:18:30.818 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:31.075 [2024-07-25 12:00:17.035413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.075 [2024-07-25 12:00:17.036801] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.075 [2024-07-25 12:00:17.036834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.075 [2024-07-25 12:00:17.036844] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.075 [2024-07-25 12:00:17.036855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.075 [2024-07-25 12:00:17.036863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:31.075 [2024-07-25 12:00:17.036873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.075 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.075 12:00:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.332 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.332 "name": "Existed_Raid", 00:18:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.332 "strip_size_kb": 64, 00:18:31.332 "state": "configuring", 00:18:31.332 "raid_level": "concat", 00:18:31.332 "superblock": false, 00:18:31.332 "num_base_bdevs": 4, 00:18:31.332 "num_base_bdevs_discovered": 1, 00:18:31.332 "num_base_bdevs_operational": 4, 00:18:31.332 "base_bdevs_list": [ 00:18:31.332 { 00:18:31.332 "name": "BaseBdev1", 00:18:31.332 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:31.332 "is_configured": true, 00:18:31.332 "data_offset": 0, 00:18:31.332 "data_size": 65536 00:18:31.332 }, 00:18:31.332 { 00:18:31.332 "name": "BaseBdev2", 00:18:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.332 "is_configured": false, 00:18:31.332 "data_offset": 0, 00:18:31.332 "data_size": 0 00:18:31.332 }, 00:18:31.332 { 00:18:31.332 "name": "BaseBdev3", 00:18:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.332 "is_configured": false, 00:18:31.332 "data_offset": 0, 00:18:31.332 "data_size": 0 00:18:31.332 }, 00:18:31.332 { 00:18:31.332 "name": "BaseBdev4", 00:18:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.332 "is_configured": false, 00:18:31.332 "data_offset": 0, 00:18:31.332 "data_size": 0 00:18:31.332 } 00:18:31.332 ] 00:18:31.332 }' 00:18:31.332 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.332 12:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.895 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:32.151 [2024-07-25 12:00:18.061246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.151 BaseBdev2 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:32.151 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.408 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.408 [ 00:18:32.408 { 00:18:32.408 "name": "BaseBdev2", 00:18:32.408 "aliases": [ 00:18:32.408 "cfe916ba-7dbf-4b1d-8688-39fc18a4136d" 00:18:32.408 ], 00:18:32.408 "product_name": "Malloc disk", 00:18:32.408 "block_size": 512, 00:18:32.408 "num_blocks": 65536, 00:18:32.408 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:32.408 
"assigned_rate_limits": { 00:18:32.408 "rw_ios_per_sec": 0, 00:18:32.408 "rw_mbytes_per_sec": 0, 00:18:32.408 "r_mbytes_per_sec": 0, 00:18:32.408 "w_mbytes_per_sec": 0 00:18:32.408 }, 00:18:32.408 "claimed": true, 00:18:32.408 "claim_type": "exclusive_write", 00:18:32.408 "zoned": false, 00:18:32.408 "supported_io_types": { 00:18:32.408 "read": true, 00:18:32.408 "write": true, 00:18:32.408 "unmap": true, 00:18:32.408 "flush": true, 00:18:32.408 "reset": true, 00:18:32.408 "nvme_admin": false, 00:18:32.408 "nvme_io": false, 00:18:32.408 "nvme_io_md": false, 00:18:32.408 "write_zeroes": true, 00:18:32.408 "zcopy": true, 00:18:32.408 "get_zone_info": false, 00:18:32.408 "zone_management": false, 00:18:32.408 "zone_append": false, 00:18:32.408 "compare": false, 00:18:32.408 "compare_and_write": false, 00:18:32.408 "abort": true, 00:18:32.408 "seek_hole": false, 00:18:32.408 "seek_data": false, 00:18:32.408 "copy": true, 00:18:32.408 "nvme_iov_md": false 00:18:32.408 }, 00:18:32.408 "memory_domains": [ 00:18:32.408 { 00:18:32.408 "dma_device_id": "system", 00:18:32.408 "dma_device_type": 1 00:18:32.408 }, 00:18:32.408 { 00:18:32.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.408 "dma_device_type": 2 00:18:32.408 } 00:18:32.408 ], 00:18:32.408 "driver_specific": {} 00:18:32.408 } 00:18:32.408 ] 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.664 "name": "Existed_Raid", 00:18:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.664 "strip_size_kb": 64, 00:18:32.664 "state": "configuring", 00:18:32.664 "raid_level": "concat", 00:18:32.664 "superblock": false, 00:18:32.664 "num_base_bdevs": 4, 00:18:32.664 "num_base_bdevs_discovered": 2, 
00:18:32.664 "num_base_bdevs_operational": 4, 00:18:32.664 "base_bdevs_list": [ 00:18:32.664 { 00:18:32.664 "name": "BaseBdev1", 00:18:32.664 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:32.664 "is_configured": true, 00:18:32.664 "data_offset": 0, 00:18:32.664 "data_size": 65536 00:18:32.664 }, 00:18:32.664 { 00:18:32.664 "name": "BaseBdev2", 00:18:32.664 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:32.664 "is_configured": true, 00:18:32.664 "data_offset": 0, 00:18:32.664 "data_size": 65536 00:18:32.664 }, 00:18:32.664 { 00:18:32.664 "name": "BaseBdev3", 00:18:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.664 "is_configured": false, 00:18:32.664 "data_offset": 0, 00:18:32.664 "data_size": 0 00:18:32.664 }, 00:18:32.664 { 00:18:32.664 "name": "BaseBdev4", 00:18:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.664 "is_configured": false, 00:18:32.664 "data_offset": 0, 00:18:32.664 "data_size": 0 00:18:32.664 } 00:18:32.664 ] 00:18:32.664 }' 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.664 12:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.229 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.486 [2024-07-25 12:00:19.536439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.486 BaseBdev3 00:18:33.486 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:33.486 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:33.486 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:33.486 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:33.486 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:33.487 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:33.487 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.744 12:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:34.001 [ 00:18:34.001 { 00:18:34.001 "name": "BaseBdev3", 00:18:34.001 "aliases": [ 00:18:34.001 "ed0612b9-de26-4715-a9c5-23efb92fde8c" 00:18:34.001 ], 00:18:34.001 "product_name": "Malloc disk", 00:18:34.001 "block_size": 512, 00:18:34.001 "num_blocks": 65536, 00:18:34.001 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:34.001 "assigned_rate_limits": { 00:18:34.001 "rw_ios_per_sec": 0, 00:18:34.001 "rw_mbytes_per_sec": 0, 00:18:34.001 "r_mbytes_per_sec": 0, 00:18:34.001 "w_mbytes_per_sec": 0 00:18:34.001 }, 00:18:34.001 "claimed": true, 00:18:34.001 "claim_type": "exclusive_write", 00:18:34.001 "zoned": false, 00:18:34.001 "supported_io_types": { 00:18:34.001 "read": true, 00:18:34.001 "write": true, 00:18:34.001 "unmap": true, 00:18:34.001 "flush": true, 00:18:34.001 "reset": true, 00:18:34.001 "nvme_admin": false, 00:18:34.001 "nvme_io": false, 00:18:34.001 
"nvme_io_md": false, 00:18:34.001 "write_zeroes": true, 00:18:34.001 "zcopy": true, 00:18:34.001 "get_zone_info": false, 00:18:34.001 "zone_management": false, 00:18:34.001 "zone_append": false, 00:18:34.001 "compare": false, 00:18:34.001 "compare_and_write": false, 00:18:34.001 "abort": true, 00:18:34.001 "seek_hole": false, 00:18:34.001 "seek_data": false, 00:18:34.001 "copy": true, 00:18:34.001 "nvme_iov_md": false 00:18:34.001 }, 00:18:34.001 "memory_domains": [ 00:18:34.001 { 00:18:34.001 "dma_device_id": "system", 00:18:34.001 "dma_device_type": 1 00:18:34.001 }, 00:18:34.001 { 00:18:34.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.001 "dma_device_type": 2 00:18:34.001 } 00:18:34.001 ], 00:18:34.001 "driver_specific": {} 00:18:34.001 } 00:18:34.001 ] 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.001 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.259 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.259 "name": "Existed_Raid", 00:18:34.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.259 "strip_size_kb": 64, 00:18:34.259 "state": "configuring", 00:18:34.259 "raid_level": "concat", 00:18:34.259 "superblock": false, 00:18:34.259 "num_base_bdevs": 4, 00:18:34.259 "num_base_bdevs_discovered": 3, 00:18:34.259 "num_base_bdevs_operational": 4, 00:18:34.259 "base_bdevs_list": [ 00:18:34.259 { 00:18:34.259 "name": "BaseBdev1", 00:18:34.259 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:34.259 "is_configured": true, 00:18:34.259 "data_offset": 0, 00:18:34.259 "data_size": 65536 00:18:34.259 }, 00:18:34.259 { 00:18:34.259 "name": "BaseBdev2", 00:18:34.259 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:34.259 "is_configured": true, 00:18:34.259 "data_offset": 0, 00:18:34.259 "data_size": 65536 00:18:34.259 }, 00:18:34.259 { 
00:18:34.259 "name": "BaseBdev3", 00:18:34.259 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:34.259 "is_configured": true, 00:18:34.259 "data_offset": 0, 00:18:34.259 "data_size": 65536 00:18:34.259 }, 00:18:34.259 { 00:18:34.259 "name": "BaseBdev4", 00:18:34.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.259 "is_configured": false, 00:18:34.259 "data_offset": 0, 00:18:34.259 "data_size": 0 00:18:34.259 } 00:18:34.259 ] 00:18:34.259 }' 00:18:34.259 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.259 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.821 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:35.077 [2024-07-25 12:00:21.015541] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:35.077 [2024-07-25 12:00:21.015576] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x175c830 00:18:35.077 [2024-07-25 12:00:21.015584] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:35.077 [2024-07-25 12:00:21.015776] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1755160 00:18:35.077 [2024-07-25 12:00:21.015892] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x175c830 00:18:35.077 [2024-07-25 12:00:21.015901] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x175c830 00:18:35.077 [2024-07-25 12:00:21.016055] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.077 BaseBdev4 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:35.077 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.334 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:35.590 [ 00:18:35.590 { 00:18:35.590 "name": "BaseBdev4", 00:18:35.590 "aliases": [ 00:18:35.590 "b8a08035-13cf-4698-8e84-2b5f7eb281eb" 00:18:35.590 ], 00:18:35.590 "product_name": "Malloc disk", 00:18:35.590 "block_size": 512, 00:18:35.590 "num_blocks": 65536, 00:18:35.590 "uuid": "b8a08035-13cf-4698-8e84-2b5f7eb281eb", 00:18:35.590 "assigned_rate_limits": { 00:18:35.590 "rw_ios_per_sec": 0, 00:18:35.590 "rw_mbytes_per_sec": 0, 00:18:35.590 "r_mbytes_per_sec": 0, 00:18:35.590 "w_mbytes_per_sec": 0 00:18:35.590 }, 00:18:35.590 "claimed": true, 00:18:35.590 "claim_type": "exclusive_write", 00:18:35.590 "zoned": false, 00:18:35.590 "supported_io_types": { 
00:18:35.590 "read": true, 00:18:35.590 "write": true, 00:18:35.590 "unmap": true, 00:18:35.590 "flush": true, 00:18:35.590 "reset": true, 00:18:35.590 "nvme_admin": false, 00:18:35.590 "nvme_io": false, 00:18:35.590 "nvme_io_md": false, 00:18:35.590 "write_zeroes": true, 00:18:35.590 "zcopy": true, 00:18:35.590 "get_zone_info": false, 00:18:35.590 "zone_management": false, 00:18:35.590 "zone_append": false, 00:18:35.590 "compare": false, 00:18:35.590 "compare_and_write": false, 00:18:35.590 "abort": true, 00:18:35.590 "seek_hole": false, 00:18:35.590 "seek_data": false, 00:18:35.590 "copy": true, 00:18:35.590 "nvme_iov_md": false 00:18:35.590 }, 00:18:35.590 "memory_domains": [ 00:18:35.590 { 00:18:35.590 "dma_device_id": "system", 00:18:35.590 "dma_device_type": 1 00:18:35.590 }, 00:18:35.590 { 00:18:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.590 "dma_device_type": 2 00:18:35.590 } 00:18:35.590 ], 00:18:35.590 "driver_specific": {} 00:18:35.590 } 00:18:35.590 ] 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.590 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.847 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.847 "name": "Existed_Raid", 00:18:35.847 "uuid": "d1fa267b-1223-4436-9b13-8d19edc3d851", 00:18:35.847 "strip_size_kb": 64, 00:18:35.847 "state": "online", 00:18:35.847 "raid_level": "concat", 00:18:35.847 "superblock": false, 00:18:35.847 "num_base_bdevs": 4, 00:18:35.847 "num_base_bdevs_discovered": 4, 00:18:35.847 "num_base_bdevs_operational": 4, 00:18:35.847 "base_bdevs_list": [ 00:18:35.847 { 00:18:35.847 "name": "BaseBdev1", 00:18:35.847 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:35.847 "is_configured": true, 00:18:35.847 "data_offset": 0, 00:18:35.847 "data_size": 65536 00:18:35.847 }, 00:18:35.847 { 00:18:35.847 "name": 
"BaseBdev2", 00:18:35.847 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:35.847 "is_configured": true, 00:18:35.847 "data_offset": 0, 00:18:35.847 "data_size": 65536 00:18:35.848 }, 00:18:35.848 { 00:18:35.848 "name": "BaseBdev3", 00:18:35.848 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:35.848 "is_configured": true, 00:18:35.848 "data_offset": 0, 00:18:35.848 "data_size": 65536 00:18:35.848 }, 00:18:35.848 { 00:18:35.848 "name": "BaseBdev4", 00:18:35.848 "uuid": "b8a08035-13cf-4698-8e84-2b5f7eb281eb", 00:18:35.848 "is_configured": true, 00:18:35.848 "data_offset": 0, 00:18:35.848 "data_size": 65536 00:18:35.848 } 00:18:35.848 ] 00:18:35.848 }' 00:18:35.848 12:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.848 12:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:36.411 [2024-07-25 12:00:22.507770] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.411 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:36.411 "name": "Existed_Raid", 00:18:36.411 "aliases": [ 00:18:36.411 "d1fa267b-1223-4436-9b13-8d19edc3d851" 00:18:36.411 ], 00:18:36.411 "product_name": "Raid Volume", 00:18:36.411 "block_size": 512, 00:18:36.411 "num_blocks": 262144, 00:18:36.411 "uuid": "d1fa267b-1223-4436-9b13-8d19edc3d851", 00:18:36.411 "assigned_rate_limits": { 00:18:36.411 "rw_ios_per_sec": 0, 00:18:36.411 "rw_mbytes_per_sec": 0, 00:18:36.411 "r_mbytes_per_sec": 0, 00:18:36.411 "w_mbytes_per_sec": 0 00:18:36.411 }, 00:18:36.411 "claimed": false, 00:18:36.411 "zoned": false, 00:18:36.411 "supported_io_types": { 00:18:36.411 "read": true, 00:18:36.411 "write": true, 00:18:36.411 "unmap": true, 00:18:36.411 "flush": true, 00:18:36.411 "reset": true, 00:18:36.411 "nvme_admin": false, 00:18:36.411 "nvme_io": false, 00:18:36.411 "nvme_io_md": false, 00:18:36.411 "write_zeroes": true, 00:18:36.411 "zcopy": false, 00:18:36.411 "get_zone_info": false, 00:18:36.411 "zone_management": false, 00:18:36.411 "zone_append": false, 00:18:36.411 "compare": false, 00:18:36.411 "compare_and_write": false, 00:18:36.411 "abort": false, 00:18:36.411 "seek_hole": false, 00:18:36.411 "seek_data": false, 00:18:36.411 "copy": false, 00:18:36.411 "nvme_iov_md": false 00:18:36.411 }, 00:18:36.411 "memory_domains": [ 00:18:36.411 { 00:18:36.411 "dma_device_id": "system", 00:18:36.411 "dma_device_type": 1 00:18:36.411 }, 00:18:36.411 { 00:18:36.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.411 
"dma_device_type": 2 00:18:36.411 }, 00:18:36.411 { 00:18:36.411 "dma_device_id": "system", 00:18:36.411 "dma_device_type": 1 00:18:36.411 }, 00:18:36.411 { 00:18:36.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.411 "dma_device_type": 2 00:18:36.411 }, 00:18:36.412 { 00:18:36.412 "dma_device_id": "system", 00:18:36.412 "dma_device_type": 1 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.412 "dma_device_type": 2 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "dma_device_id": "system", 00:18:36.412 "dma_device_type": 1 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.412 "dma_device_type": 2 00:18:36.412 } 00:18:36.412 ], 00:18:36.412 "driver_specific": { 00:18:36.412 "raid": { 00:18:36.412 "uuid": "d1fa267b-1223-4436-9b13-8d19edc3d851", 00:18:36.412 "strip_size_kb": 64, 00:18:36.412 "state": "online", 00:18:36.412 "raid_level": "concat", 00:18:36.412 "superblock": false, 00:18:36.412 "num_base_bdevs": 4, 00:18:36.412 "num_base_bdevs_discovered": 4, 00:18:36.412 "num_base_bdevs_operational": 4, 00:18:36.412 "base_bdevs_list": [ 00:18:36.412 { 00:18:36.412 "name": "BaseBdev1", 00:18:36.412 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:36.412 "is_configured": true, 00:18:36.412 "data_offset": 0, 00:18:36.412 "data_size": 65536 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "name": "BaseBdev2", 00:18:36.412 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:36.412 "is_configured": true, 00:18:36.412 "data_offset": 0, 00:18:36.412 "data_size": 65536 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "name": "BaseBdev3", 00:18:36.412 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:36.412 "is_configured": true, 00:18:36.412 "data_offset": 0, 00:18:36.412 "data_size": 65536 00:18:36.412 }, 00:18:36.412 { 00:18:36.412 "name": "BaseBdev4", 00:18:36.412 "uuid": "b8a08035-13cf-4698-8e84-2b5f7eb281eb", 00:18:36.412 "is_configured": true, 00:18:36.412 "data_offset": 0, 00:18:36.412 "data_size": 65536 00:18:36.412 } 00:18:36.412 ] 00:18:36.412 } 00:18:36.412 } 00:18:36.412 }' 00:18:36.412 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.668 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:36.668 BaseBdev2 00:18:36.668 BaseBdev3 00:18:36.668 BaseBdev4' 00:18:36.668 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.668 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:36.668 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.925 "name": "BaseBdev1", 00:18:36.925 "aliases": [ 00:18:36.925 "35a7be38-862b-4743-946c-e8291ffb891f" 00:18:36.925 ], 00:18:36.925 "product_name": "Malloc disk", 00:18:36.925 "block_size": 512, 00:18:36.925 "num_blocks": 65536, 00:18:36.925 "uuid": "35a7be38-862b-4743-946c-e8291ffb891f", 00:18:36.925 "assigned_rate_limits": { 00:18:36.925 "rw_ios_per_sec": 0, 00:18:36.925 "rw_mbytes_per_sec": 0, 00:18:36.925 "r_mbytes_per_sec": 0, 00:18:36.925 "w_mbytes_per_sec": 0 00:18:36.925 }, 00:18:36.925 "claimed": true, 00:18:36.925 "claim_type": "exclusive_write", 
00:18:36.925 "zoned": false, 00:18:36.925 "supported_io_types": { 00:18:36.925 "read": true, 00:18:36.925 "write": true, 00:18:36.925 "unmap": true, 00:18:36.925 "flush": true, 00:18:36.925 "reset": true, 00:18:36.925 "nvme_admin": false, 00:18:36.925 "nvme_io": false, 00:18:36.925 "nvme_io_md": false, 00:18:36.925 "write_zeroes": true, 00:18:36.925 "zcopy": true, 00:18:36.925 "get_zone_info": false, 00:18:36.925 "zone_management": false, 00:18:36.925 "zone_append": false, 00:18:36.925 "compare": false, 00:18:36.925 "compare_and_write": false, 00:18:36.925 "abort": true, 00:18:36.925 "seek_hole": false, 00:18:36.925 "seek_data": false, 00:18:36.925 "copy": true, 00:18:36.925 "nvme_iov_md": false 00:18:36.925 }, 00:18:36.925 "memory_domains": [ 00:18:36.925 { 00:18:36.925 "dma_device_id": "system", 00:18:36.925 "dma_device_type": 1 00:18:36.925 }, 00:18:36.925 { 00:18:36.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.925 "dma_device_type": 2 00:18:36.925 } 00:18:36.925 ], 00:18:36.925 "driver_specific": {} 00:18:36.925 }' 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.925 12:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.925 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.925 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.925 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.181 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.181 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.181 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.181 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:37.181 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.449 "name": "BaseBdev2", 00:18:37.449 "aliases": [ 00:18:37.449 "cfe916ba-7dbf-4b1d-8688-39fc18a4136d" 00:18:37.449 ], 00:18:37.449 "product_name": "Malloc disk", 00:18:37.449 "block_size": 512, 00:18:37.449 "num_blocks": 65536, 00:18:37.449 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:37.449 "assigned_rate_limits": { 00:18:37.449 "rw_ios_per_sec": 0, 00:18:37.449 "rw_mbytes_per_sec": 0, 00:18:37.449 "r_mbytes_per_sec": 0, 00:18:37.449 "w_mbytes_per_sec": 0 00:18:37.449 }, 00:18:37.449 "claimed": true, 00:18:37.449 "claim_type": "exclusive_write", 00:18:37.449 "zoned": false, 00:18:37.449 "supported_io_types": { 00:18:37.449 "read": true, 00:18:37.449 "write": true, 00:18:37.449 "unmap": true, 00:18:37.449 "flush": true, 
00:18:37.449 "reset": true, 00:18:37.449 "nvme_admin": false, 00:18:37.449 "nvme_io": false, 00:18:37.449 "nvme_io_md": false, 00:18:37.449 "write_zeroes": true, 00:18:37.449 "zcopy": true, 00:18:37.449 "get_zone_info": false, 00:18:37.449 "zone_management": false, 00:18:37.449 "zone_append": false, 00:18:37.449 "compare": false, 00:18:37.449 "compare_and_write": false, 00:18:37.449 "abort": true, 00:18:37.449 "seek_hole": false, 00:18:37.449 "seek_data": false, 00:18:37.449 "copy": true, 00:18:37.449 "nvme_iov_md": false 00:18:37.449 }, 00:18:37.449 "memory_domains": [ 00:18:37.449 { 00:18:37.449 "dma_device_id": "system", 00:18:37.449 "dma_device_type": 1 00:18:37.449 }, 00:18:37.449 { 00:18:37.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.449 "dma_device_type": 2 00:18:37.449 } 00:18:37.449 ], 00:18:37.449 "driver_specific": {} 00:18:37.449 }' 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.449 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:37.705 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.962 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.962 "name": "BaseBdev3", 00:18:37.962 "aliases": [ 00:18:37.962 "ed0612b9-de26-4715-a9c5-23efb92fde8c" 00:18:37.962 ], 00:18:37.962 "product_name": "Malloc disk", 00:18:37.962 "block_size": 512, 00:18:37.962 "num_blocks": 65536, 00:18:37.962 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:37.962 "assigned_rate_limits": { 00:18:37.962 "rw_ios_per_sec": 0, 00:18:37.962 "rw_mbytes_per_sec": 0, 00:18:37.962 "r_mbytes_per_sec": 0, 00:18:37.962 "w_mbytes_per_sec": 0 00:18:37.962 }, 00:18:37.962 "claimed": true, 00:18:37.962 "claim_type": "exclusive_write", 00:18:37.962 "zoned": false, 00:18:37.962 "supported_io_types": { 00:18:37.962 "read": true, 00:18:37.962 "write": true, 00:18:37.962 "unmap": true, 00:18:37.962 "flush": true, 00:18:37.962 "reset": true, 00:18:37.962 "nvme_admin": false, 00:18:37.962 "nvme_io": false, 00:18:37.962 "nvme_io_md": false, 00:18:37.962 "write_zeroes": true, 00:18:37.962 
"zcopy": true, 00:18:37.962 "get_zone_info": false, 00:18:37.962 "zone_management": false, 00:18:37.962 "zone_append": false, 00:18:37.962 "compare": false, 00:18:37.962 "compare_and_write": false, 00:18:37.962 "abort": true, 00:18:37.962 "seek_hole": false, 00:18:37.962 "seek_data": false, 00:18:37.962 "copy": true, 00:18:37.962 "nvme_iov_md": false 00:18:37.962 }, 00:18:37.962 "memory_domains": [ 00:18:37.962 { 00:18:37.962 "dma_device_id": "system", 00:18:37.962 "dma_device_type": 1 00:18:37.962 }, 00:18:37.962 { 00:18:37.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.962 "dma_device_type": 2 00:18:37.962 } 00:18:37.962 ], 00:18:37.962 "driver_specific": {} 00:18:37.962 }' 00:18:37.962 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.962 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.962 12:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.962 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.962 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.962 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.962 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:38.218 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.502 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.502 "name": "BaseBdev4", 00:18:38.502 "aliases": [ 00:18:38.502 "b8a08035-13cf-4698-8e84-2b5f7eb281eb" 00:18:38.502 ], 00:18:38.502 "product_name": "Malloc disk", 00:18:38.502 "block_size": 512, 00:18:38.502 "num_blocks": 65536, 00:18:38.502 "uuid": "b8a08035-13cf-4698-8e84-2b5f7eb281eb", 00:18:38.502 "assigned_rate_limits": { 00:18:38.502 "rw_ios_per_sec": 0, 00:18:38.502 "rw_mbytes_per_sec": 0, 00:18:38.502 "r_mbytes_per_sec": 0, 00:18:38.502 "w_mbytes_per_sec": 0 00:18:38.502 }, 00:18:38.502 "claimed": true, 00:18:38.502 "claim_type": "exclusive_write", 00:18:38.502 "zoned": false, 00:18:38.502 "supported_io_types": { 00:18:38.502 "read": true, 00:18:38.502 "write": true, 00:18:38.502 "unmap": true, 00:18:38.502 "flush": true, 00:18:38.502 "reset": true, 00:18:38.502 "nvme_admin": false, 00:18:38.502 "nvme_io": false, 00:18:38.502 "nvme_io_md": false, 00:18:38.502 "write_zeroes": true, 00:18:38.502 "zcopy": true, 00:18:38.502 "get_zone_info": false, 00:18:38.502 "zone_management": false, 00:18:38.502 "zone_append": false, 00:18:38.502 "compare": false, 00:18:38.502 
"compare_and_write": false, 00:18:38.502 "abort": true, 00:18:38.503 "seek_hole": false, 00:18:38.503 "seek_data": false, 00:18:38.503 "copy": true, 00:18:38.503 "nvme_iov_md": false 00:18:38.503 }, 00:18:38.503 "memory_domains": [ 00:18:38.503 { 00:18:38.503 "dma_device_id": "system", 00:18:38.503 "dma_device_type": 1 00:18:38.503 }, 00:18:38.503 { 00:18:38.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.503 "dma_device_type": 2 00:18:38.503 } 00:18:38.503 ], 00:18:38.503 "driver_specific": {} 00:18:38.503 }' 00:18:38.503 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.503 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.503 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.503 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.759 12:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:39.016 [2024-07-25 12:00:25.026346] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.016 [2024-07-25 12:00:25.026371] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.016 [2024-07-25 12:00:25.026415] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.016 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.274 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.274 "name": "Existed_Raid", 00:18:39.274 "uuid": "d1fa267b-1223-4436-9b13-8d19edc3d851", 00:18:39.274 "strip_size_kb": 64, 00:18:39.274 "state": "offline", 00:18:39.274 "raid_level": "concat", 00:18:39.274 "superblock": false, 00:18:39.274 "num_base_bdevs": 4, 00:18:39.274 "num_base_bdevs_discovered": 3, 00:18:39.274 "num_base_bdevs_operational": 3, 00:18:39.274 "base_bdevs_list": [ 00:18:39.274 { 00:18:39.274 "name": null, 00:18:39.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.274 "is_configured": false, 00:18:39.274 "data_offset": 0, 00:18:39.274 "data_size": 65536 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "name": "BaseBdev2", 00:18:39.274 "uuid": "cfe916ba-7dbf-4b1d-8688-39fc18a4136d", 00:18:39.274 "is_configured": true, 00:18:39.274 "data_offset": 0, 00:18:39.274 "data_size": 65536 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "name": "BaseBdev3", 00:18:39.274 "uuid": "ed0612b9-de26-4715-a9c5-23efb92fde8c", 00:18:39.274 "is_configured": true, 00:18:39.274 "data_offset": 0, 00:18:39.274 "data_size": 65536 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "name": "BaseBdev4", 00:18:39.274 "uuid": "b8a08035-13cf-4698-8e84-2b5f7eb281eb", 00:18:39.274 "is_configured": true, 00:18:39.274 "data_offset": 0, 00:18:39.274 "data_size": 65536 00:18:39.274 } 00:18:39.274 ] 00:18:39.274 }' 00:18:39.274 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.274 12:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.836 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:39.836 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:39.836 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:39.836 12:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.119 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:40.119 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.119 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:40.375 [2024-07-25 12:00:26.270668] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.375 12:00:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:40.375 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:40.375 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.375 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:40.632 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:40.632 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.632 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:40.632 [2024-07-25 12:00:26.737744] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.888 12:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:41.144 [2024-07-25 12:00:27.201134] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:41.144 [2024-07-25 12:00:27.201180] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x175c830 name Existed_Raid, state offline 00:18:41.144 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:41.144 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:41.144 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.144 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:41.401 12:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:41.658 BaseBdev2 00:18:41.658 12:00:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:41.658 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.913 12:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.169 [ 00:18:42.169 { 00:18:42.169 "name": "BaseBdev2", 00:18:42.169 "aliases": [ 00:18:42.169 "d29cbc25-521f-4035-a9d2-3f2e7f4912d5" 00:18:42.169 ], 00:18:42.169 "product_name": "Malloc disk", 00:18:42.169 "block_size": 512, 00:18:42.169 "num_blocks": 65536, 00:18:42.169 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:42.169 "assigned_rate_limits": { 00:18:42.169 "rw_ios_per_sec": 0, 00:18:42.169 "rw_mbytes_per_sec": 0, 00:18:42.169 "r_mbytes_per_sec": 0, 00:18:42.169 "w_mbytes_per_sec": 0 00:18:42.169 }, 00:18:42.169 "claimed": false, 00:18:42.169 "zoned": false, 00:18:42.169 "supported_io_types": { 00:18:42.169 "read": true, 00:18:42.169 "write": true, 00:18:42.169 "unmap": true, 00:18:42.169 "flush": true, 00:18:42.169 "reset": true, 00:18:42.169 "nvme_admin": false, 00:18:42.169 "nvme_io": false, 00:18:42.169 "nvme_io_md": false, 00:18:42.169 "write_zeroes": true, 00:18:42.169 "zcopy": true, 00:18:42.169 "get_zone_info": false, 00:18:42.169 "zone_management": false, 00:18:42.169 "zone_append": false, 00:18:42.169 "compare": false, 00:18:42.169 "compare_and_write": false, 00:18:42.169 "abort": true, 00:18:42.169 "seek_hole": false, 00:18:42.169 "seek_data": false, 00:18:42.169 "copy": true, 00:18:42.169 "nvme_iov_md": false 00:18:42.169 }, 00:18:42.169 "memory_domains": [ 00:18:42.169 { 00:18:42.169 "dma_device_id": "system", 00:18:42.169 "dma_device_type": 1 00:18:42.169 }, 00:18:42.169 { 00:18:42.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.169 "dma_device_type": 2 00:18:42.169 } 00:18:42.169 ], 00:18:42.169 "driver_specific": {} 00:18:42.169 } 00:18:42.169 ] 00:18:42.169 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:42.169 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:42.169 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:42.169 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.426 BaseBdev3 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.426 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.682 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:42.939 [ 00:18:42.939 { 00:18:42.939 "name": "BaseBdev3", 00:18:42.939 "aliases": [ 00:18:42.939 "5f6ea788-8edd-4afd-a705-dfee5286d1a8" 00:18:42.939 ], 00:18:42.939 "product_name": "Malloc disk", 00:18:42.939 "block_size": 512, 00:18:42.939 "num_blocks": 65536, 00:18:42.939 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:42.939 "assigned_rate_limits": { 00:18:42.939 "rw_ios_per_sec": 0, 00:18:42.939 "rw_mbytes_per_sec": 0, 00:18:42.939 "r_mbytes_per_sec": 0, 00:18:42.939 "w_mbytes_per_sec": 0 00:18:42.939 }, 00:18:42.939 "claimed": false, 00:18:42.939 "zoned": false, 00:18:42.939 "supported_io_types": { 00:18:42.939 "read": true, 00:18:42.939 "write": true, 00:18:42.939 "unmap": true, 00:18:42.939 "flush": true, 00:18:42.939 "reset": true, 00:18:42.939 "nvme_admin": false, 00:18:42.939 "nvme_io": false, 00:18:42.939 "nvme_io_md": false, 00:18:42.939 "write_zeroes": true, 00:18:42.939 "zcopy": true, 00:18:42.939 "get_zone_info": false, 00:18:42.939 "zone_management": false, 00:18:42.939 "zone_append": false, 00:18:42.939 "compare": false, 00:18:42.939 "compare_and_write": false, 00:18:42.939 "abort": true, 00:18:42.939 "seek_hole": false, 00:18:42.939 "seek_data": false, 00:18:42.939 "copy": true, 00:18:42.939 "nvme_iov_md": false 00:18:42.939 }, 00:18:42.939 "memory_domains": [ 00:18:42.939 { 00:18:42.939 "dma_device_id": "system", 00:18:42.939 "dma_device_type": 1 00:18:42.939 }, 00:18:42.939 { 00:18:42.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.939 "dma_device_type": 2 00:18:42.939 } 00:18:42.939 ], 00:18:42.939 "driver_specific": {} 00:18:42.939 } 00:18:42.939 ] 00:18:42.939 12:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:42.939 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:42.939 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:42.939 12:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.939 BaseBdev4 00:18:42.939 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:42.939 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:43.195 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.195 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:43.195 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.195 12:00:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.195 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.195 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.452 [ 00:18:43.452 { 00:18:43.452 "name": "BaseBdev4", 00:18:43.452 "aliases": [ 00:18:43.452 "008d249a-266a-4434-9f2e-803562311153" 00:18:43.452 ], 00:18:43.452 "product_name": "Malloc disk", 00:18:43.452 "block_size": 512, 00:18:43.452 "num_blocks": 65536, 00:18:43.452 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:43.452 "assigned_rate_limits": { 00:18:43.452 "rw_ios_per_sec": 0, 00:18:43.452 "rw_mbytes_per_sec": 0, 00:18:43.452 "r_mbytes_per_sec": 0, 00:18:43.452 "w_mbytes_per_sec": 0 00:18:43.452 }, 00:18:43.452 "claimed": false, 00:18:43.452 "zoned": false, 00:18:43.452 "supported_io_types": { 00:18:43.452 "read": true, 00:18:43.452 "write": true, 00:18:43.452 "unmap": true, 00:18:43.452 "flush": true, 00:18:43.452 "reset": true, 00:18:43.452 "nvme_admin": false, 00:18:43.452 "nvme_io": false, 00:18:43.452 "nvme_io_md": false, 00:18:43.452 "write_zeroes": true, 00:18:43.452 "zcopy": true, 00:18:43.452 "get_zone_info": false, 00:18:43.452 "zone_management": false, 00:18:43.452 "zone_append": false, 00:18:43.452 "compare": false, 00:18:43.452 "compare_and_write": false, 00:18:43.452 "abort": true, 00:18:43.452 "seek_hole": false, 00:18:43.452 "seek_data": false, 00:18:43.452 "copy": true, 00:18:43.452 "nvme_iov_md": false 00:18:43.452 }, 00:18:43.452 "memory_domains": [ 00:18:43.452 { 00:18:43.452 "dma_device_id": "system", 00:18:43.452 "dma_device_type": 1 00:18:43.452 }, 00:18:43.452 { 00:18:43.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.452 "dma_device_type": 2 00:18:43.452 } 00:18:43.452 ], 00:18:43.452 "driver_specific": {} 00:18:43.452 } 00:18:43.452 ] 00:18:43.452 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:43.452 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:43.452 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:43.452 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:43.707 [2024-07-25 12:00:29.714830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.707 [2024-07-25 12:00:29.714870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.707 [2024-07-25 12:00:29.714887] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.707 [2024-07-25 12:00:29.716104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.707 [2024-07-25 12:00:29.716151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.707 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:43.707 12:00:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.708 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.963 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.963 "name": "Existed_Raid", 00:18:43.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.963 "strip_size_kb": 64, 00:18:43.963 "state": "configuring", 00:18:43.963 "raid_level": "concat", 00:18:43.963 "superblock": false, 00:18:43.963 "num_base_bdevs": 4, 00:18:43.963 "num_base_bdevs_discovered": 3, 00:18:43.963 "num_base_bdevs_operational": 4, 00:18:43.963 "base_bdevs_list": [ 00:18:43.964 { 00:18:43.964 "name": "BaseBdev1", 00:18:43.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.964 "is_configured": false, 00:18:43.964 "data_offset": 0, 00:18:43.964 "data_size": 0 00:18:43.964 }, 00:18:43.964 { 00:18:43.964 "name": "BaseBdev2", 00:18:43.964 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:43.964 "is_configured": true, 00:18:43.964 "data_offset": 0, 00:18:43.964 "data_size": 65536 00:18:43.964 }, 00:18:43.964 { 00:18:43.964 "name": "BaseBdev3", 00:18:43.964 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:43.964 "is_configured": true, 00:18:43.964 "data_offset": 0, 00:18:43.964 "data_size": 65536 00:18:43.964 }, 00:18:43.964 { 00:18:43.964 "name": "BaseBdev4", 00:18:43.964 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:43.964 "is_configured": true, 00:18:43.964 "data_offset": 0, 00:18:43.964 "data_size": 65536 00:18:43.964 } 00:18:43.964 ] 00:18:43.964 }' 00:18:43.964 12:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.964 12:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.529 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:44.785 [2024-07-25 12:00:30.741674] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
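[editor's note — not part of the log] The trace above and below is raid_state_function_test driving a concat array through its configuring/online/offline transitions purely via RPC calls against the test target on /var/tmp/spdk-raid.sock. The following is a condensed, hand-written sketch of that flow for readers who want the commands without the xtrace noise. It is not the bdev_raid.sh script itself and does not preserve its exact ordering; it only reuses RPC commands that appear verbatim in this log, and the "$rpc" shorthand plus the loop over bdev names are conveniences introduced here for readability.

# Illustrative sketch, assuming an SPDK target is already listening on the test socket.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 1. Create malloc base bdevs (32 MiB, 512-byte blocks) and wait for examine to finish.
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "$b"
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$b" -t 2000        # confirm the bdev is registered
done

# 2. Assemble the concat raid (64 KiB strip size) from the four base bdevs.
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# 3. Inspect raid state the same way the test helpers do: dump all raid bdevs, filter with jq.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# 4. Remove a base bdev while the raid is still configuring; it simply drops out of the
#    discovered set, and adding it back re-claims it.
$rpc bdev_raid_remove_base_bdev BaseBdev2
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2

# 5. Deleting a claimed base bdev under an online concat raid (no redundancy) takes the
#    array offline, which is the state transition the surrounding trace verifies.
$rpc bdev_malloc_delete BaseBdev1

[end editor's note]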
00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.785 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.041 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.041 "name": "Existed_Raid", 00:18:45.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.041 "strip_size_kb": 64, 00:18:45.041 "state": "configuring", 00:18:45.041 "raid_level": "concat", 00:18:45.041 "superblock": false, 00:18:45.041 "num_base_bdevs": 4, 00:18:45.041 "num_base_bdevs_discovered": 2, 00:18:45.041 "num_base_bdevs_operational": 4, 00:18:45.041 "base_bdevs_list": [ 00:18:45.041 { 00:18:45.041 "name": "BaseBdev1", 00:18:45.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.041 "is_configured": false, 00:18:45.041 "data_offset": 0, 00:18:45.041 "data_size": 0 00:18:45.041 }, 00:18:45.041 { 00:18:45.041 "name": null, 00:18:45.041 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:45.041 "is_configured": false, 00:18:45.041 "data_offset": 0, 00:18:45.041 "data_size": 65536 00:18:45.041 }, 00:18:45.041 { 00:18:45.041 "name": "BaseBdev3", 00:18:45.041 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:45.042 "is_configured": true, 00:18:45.042 "data_offset": 0, 00:18:45.042 "data_size": 65536 00:18:45.042 }, 00:18:45.042 { 00:18:45.042 "name": "BaseBdev4", 00:18:45.042 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:45.042 "is_configured": true, 00:18:45.042 "data_offset": 0, 00:18:45.042 "data_size": 65536 00:18:45.042 } 00:18:45.042 ] 00:18:45.042 }' 00:18:45.042 12:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.042 12:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.605 12:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.605 12:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:45.862 12:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:45.862 12:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:46.118 [2024-07-25 
12:00:32.024137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.118 BaseBdev1 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:46.118 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.375 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:46.375 [ 00:18:46.375 { 00:18:46.375 "name": "BaseBdev1", 00:18:46.375 "aliases": [ 00:18:46.375 "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02" 00:18:46.375 ], 00:18:46.375 "product_name": "Malloc disk", 00:18:46.375 "block_size": 512, 00:18:46.375 "num_blocks": 65536, 00:18:46.375 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:46.375 "assigned_rate_limits": { 00:18:46.375 "rw_ios_per_sec": 0, 00:18:46.376 "rw_mbytes_per_sec": 0, 00:18:46.376 "r_mbytes_per_sec": 0, 00:18:46.376 "w_mbytes_per_sec": 0 00:18:46.376 }, 00:18:46.376 "claimed": true, 00:18:46.376 "claim_type": "exclusive_write", 00:18:46.376 "zoned": false, 00:18:46.376 "supported_io_types": { 00:18:46.376 "read": true, 00:18:46.376 "write": true, 00:18:46.376 "unmap": true, 00:18:46.376 "flush": true, 00:18:46.376 "reset": true, 00:18:46.376 "nvme_admin": false, 00:18:46.376 "nvme_io": false, 00:18:46.376 "nvme_io_md": false, 00:18:46.376 "write_zeroes": true, 00:18:46.376 "zcopy": true, 00:18:46.376 "get_zone_info": false, 00:18:46.376 "zone_management": false, 00:18:46.376 "zone_append": false, 00:18:46.376 "compare": false, 00:18:46.376 "compare_and_write": false, 00:18:46.376 "abort": true, 00:18:46.376 "seek_hole": false, 00:18:46.376 "seek_data": false, 00:18:46.376 "copy": true, 00:18:46.376 "nvme_iov_md": false 00:18:46.376 }, 00:18:46.376 "memory_domains": [ 00:18:46.376 { 00:18:46.376 "dma_device_id": "system", 00:18:46.376 "dma_device_type": 1 00:18:46.376 }, 00:18:46.376 { 00:18:46.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.376 "dma_device_type": 2 00:18:46.376 } 00:18:46.376 ], 00:18:46.376 "driver_specific": {} 00:18:46.376 } 00:18:46.376 ] 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.376 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.634 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.634 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.634 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.634 "name": "Existed_Raid", 00:18:46.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.634 "strip_size_kb": 64, 00:18:46.634 "state": "configuring", 00:18:46.634 "raid_level": "concat", 00:18:46.634 "superblock": false, 00:18:46.634 "num_base_bdevs": 4, 00:18:46.634 "num_base_bdevs_discovered": 3, 00:18:46.634 "num_base_bdevs_operational": 4, 00:18:46.634 "base_bdevs_list": [ 00:18:46.634 { 00:18:46.634 "name": "BaseBdev1", 00:18:46.634 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:46.634 "is_configured": true, 00:18:46.634 "data_offset": 0, 00:18:46.634 "data_size": 65536 00:18:46.634 }, 00:18:46.634 { 00:18:46.634 "name": null, 00:18:46.634 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:46.634 "is_configured": false, 00:18:46.634 "data_offset": 0, 00:18:46.634 "data_size": 65536 00:18:46.634 }, 00:18:46.634 { 00:18:46.634 "name": "BaseBdev3", 00:18:46.634 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:46.634 "is_configured": true, 00:18:46.634 "data_offset": 0, 00:18:46.634 "data_size": 65536 00:18:46.634 }, 00:18:46.634 { 00:18:46.634 "name": "BaseBdev4", 00:18:46.634 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:46.634 "is_configured": true, 00:18:46.634 "data_offset": 0, 00:18:46.634 "data_size": 65536 00:18:46.634 } 00:18:46.634 ] 00:18:46.634 }' 00:18:46.634 12:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.634 12:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.198 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.198 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:47.454 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:47.454 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:47.710 [2024-07-25 12:00:33.732666] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.710 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.967 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.967 "name": "Existed_Raid", 00:18:47.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.967 "strip_size_kb": 64, 00:18:47.967 "state": "configuring", 00:18:47.967 "raid_level": "concat", 00:18:47.967 "superblock": false, 00:18:47.967 "num_base_bdevs": 4, 00:18:47.967 "num_base_bdevs_discovered": 2, 00:18:47.967 "num_base_bdevs_operational": 4, 00:18:47.967 "base_bdevs_list": [ 00:18:47.967 { 00:18:47.967 "name": "BaseBdev1", 00:18:47.967 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:47.967 "is_configured": true, 00:18:47.967 "data_offset": 0, 00:18:47.967 "data_size": 65536 00:18:47.967 }, 00:18:47.967 { 00:18:47.967 "name": null, 00:18:47.967 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:47.967 "is_configured": false, 00:18:47.967 "data_offset": 0, 00:18:47.967 "data_size": 65536 00:18:47.967 }, 00:18:47.967 { 00:18:47.967 "name": null, 00:18:47.967 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:47.967 "is_configured": false, 00:18:47.967 "data_offset": 0, 00:18:47.967 "data_size": 65536 00:18:47.967 }, 00:18:47.967 { 00:18:47.967 "name": "BaseBdev4", 00:18:47.967 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:47.967 "is_configured": true, 00:18:47.967 "data_offset": 0, 00:18:47.967 "data_size": 65536 00:18:47.967 } 00:18:47.967 ] 00:18:47.967 }' 00:18:47.967 12:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.967 12:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.531 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.531 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:48.790 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:48.790 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:49.048 [2024-07-25 12:00:34.923846] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.048 12:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.305 12:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.305 "name": "Existed_Raid", 00:18:49.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.305 "strip_size_kb": 64, 00:18:49.305 "state": "configuring", 00:18:49.305 "raid_level": "concat", 00:18:49.305 "superblock": false, 00:18:49.305 "num_base_bdevs": 4, 00:18:49.305 "num_base_bdevs_discovered": 3, 00:18:49.305 "num_base_bdevs_operational": 4, 00:18:49.305 "base_bdevs_list": [ 00:18:49.305 { 00:18:49.305 "name": "BaseBdev1", 00:18:49.305 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:49.305 "is_configured": true, 00:18:49.305 "data_offset": 0, 00:18:49.305 "data_size": 65536 00:18:49.305 }, 00:18:49.305 { 00:18:49.305 "name": null, 00:18:49.305 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:49.305 "is_configured": false, 00:18:49.305 "data_offset": 0, 00:18:49.305 "data_size": 65536 00:18:49.305 }, 00:18:49.305 { 00:18:49.305 "name": "BaseBdev3", 00:18:49.305 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:49.305 "is_configured": true, 00:18:49.305 "data_offset": 0, 00:18:49.305 "data_size": 65536 00:18:49.305 }, 00:18:49.306 { 00:18:49.306 "name": "BaseBdev4", 00:18:49.306 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:49.306 "is_configured": true, 00:18:49.306 "data_offset": 0, 00:18:49.306 "data_size": 65536 00:18:49.306 } 00:18:49.306 ] 00:18:49.306 }' 00:18:49.306 12:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.306 12:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.869 12:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.869 12:00:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:49.869 12:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:49.869 12:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:50.126 [2024-07-25 12:00:36.195211] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.126 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.382 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.382 "name": "Existed_Raid", 00:18:50.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.382 "strip_size_kb": 64, 00:18:50.382 "state": "configuring", 00:18:50.382 "raid_level": "concat", 00:18:50.382 "superblock": false, 00:18:50.382 "num_base_bdevs": 4, 00:18:50.382 "num_base_bdevs_discovered": 2, 00:18:50.382 "num_base_bdevs_operational": 4, 00:18:50.382 "base_bdevs_list": [ 00:18:50.382 { 00:18:50.382 "name": null, 00:18:50.382 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:50.382 "is_configured": false, 00:18:50.382 "data_offset": 0, 00:18:50.382 "data_size": 65536 00:18:50.382 }, 00:18:50.382 { 00:18:50.382 "name": null, 00:18:50.382 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:50.382 "is_configured": false, 00:18:50.382 "data_offset": 0, 00:18:50.382 "data_size": 65536 00:18:50.382 }, 00:18:50.382 { 00:18:50.382 "name": "BaseBdev3", 00:18:50.382 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:50.382 "is_configured": true, 00:18:50.382 "data_offset": 0, 00:18:50.382 "data_size": 65536 00:18:50.382 }, 00:18:50.382 { 00:18:50.382 "name": "BaseBdev4", 00:18:50.382 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:50.382 "is_configured": true, 00:18:50.382 "data_offset": 0, 00:18:50.382 "data_size": 65536 00:18:50.382 } 00:18:50.382 ] 00:18:50.382 }' 00:18:50.382 12:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.382 12:00:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.944 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.945 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:51.243 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:51.243 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:51.506 [2024-07-25 12:00:37.476838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.506 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.762 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.762 "name": "Existed_Raid", 00:18:51.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.762 "strip_size_kb": 64, 00:18:51.762 "state": "configuring", 00:18:51.762 "raid_level": "concat", 00:18:51.762 "superblock": false, 00:18:51.762 "num_base_bdevs": 4, 00:18:51.762 "num_base_bdevs_discovered": 3, 00:18:51.762 "num_base_bdevs_operational": 4, 00:18:51.762 "base_bdevs_list": [ 00:18:51.762 { 00:18:51.762 "name": null, 00:18:51.762 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:51.762 "is_configured": false, 00:18:51.762 "data_offset": 0, 00:18:51.762 "data_size": 65536 00:18:51.762 }, 00:18:51.762 { 00:18:51.762 "name": "BaseBdev2", 00:18:51.762 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:51.762 "is_configured": true, 00:18:51.762 "data_offset": 0, 00:18:51.762 "data_size": 65536 00:18:51.762 }, 00:18:51.762 { 00:18:51.762 "name": "BaseBdev3", 00:18:51.762 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:51.762 "is_configured": true, 00:18:51.762 "data_offset": 0, 00:18:51.762 "data_size": 65536 00:18:51.762 }, 00:18:51.762 { 
00:18:51.762 "name": "BaseBdev4", 00:18:51.762 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:51.762 "is_configured": true, 00:18:51.762 "data_offset": 0, 00:18:51.762 "data_size": 65536 00:18:51.762 } 00:18:51.762 ] 00:18:51.762 }' 00:18:51.762 12:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.762 12:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.323 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.323 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:52.579 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:52.579 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:52.579 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.836 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02 00:18:52.836 [2024-07-25 12:00:38.947834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:52.836 [2024-07-25 12:00:38.947868] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x17526f0 00:18:52.836 [2024-07-25 12:00:38.947876] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:52.836 [2024-07-25 12:00:38.948059] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x175e3d0 00:18:52.836 [2024-07-25 12:00:38.948176] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17526f0 00:18:52.836 [2024-07-25 12:00:38.948185] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17526f0 00:18:52.836 [2024-07-25 12:00:38.948336] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.836 NewBaseBdev 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:53.093 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.093 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:53.349 [ 00:18:53.349 { 00:18:53.349 "name": "NewBaseBdev", 
00:18:53.349 "aliases": [ 00:18:53.349 "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02" 00:18:53.349 ], 00:18:53.349 "product_name": "Malloc disk", 00:18:53.349 "block_size": 512, 00:18:53.349 "num_blocks": 65536, 00:18:53.349 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:53.349 "assigned_rate_limits": { 00:18:53.349 "rw_ios_per_sec": 0, 00:18:53.349 "rw_mbytes_per_sec": 0, 00:18:53.349 "r_mbytes_per_sec": 0, 00:18:53.349 "w_mbytes_per_sec": 0 00:18:53.349 }, 00:18:53.349 "claimed": true, 00:18:53.349 "claim_type": "exclusive_write", 00:18:53.349 "zoned": false, 00:18:53.349 "supported_io_types": { 00:18:53.349 "read": true, 00:18:53.349 "write": true, 00:18:53.349 "unmap": true, 00:18:53.349 "flush": true, 00:18:53.349 "reset": true, 00:18:53.349 "nvme_admin": false, 00:18:53.349 "nvme_io": false, 00:18:53.349 "nvme_io_md": false, 00:18:53.349 "write_zeroes": true, 00:18:53.349 "zcopy": true, 00:18:53.349 "get_zone_info": false, 00:18:53.349 "zone_management": false, 00:18:53.349 "zone_append": false, 00:18:53.349 "compare": false, 00:18:53.349 "compare_and_write": false, 00:18:53.349 "abort": true, 00:18:53.349 "seek_hole": false, 00:18:53.349 "seek_data": false, 00:18:53.349 "copy": true, 00:18:53.349 "nvme_iov_md": false 00:18:53.349 }, 00:18:53.349 "memory_domains": [ 00:18:53.349 { 00:18:53.349 "dma_device_id": "system", 00:18:53.349 "dma_device_type": 1 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.349 "dma_device_type": 2 00:18:53.349 } 00:18:53.349 ], 00:18:53.349 "driver_specific": {} 00:18:53.349 } 00:18:53.349 ] 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.349 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.606 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.606 "name": "Existed_Raid", 00:18:53.606 "uuid": "3dcdbb59-a631-477d-b5a6-749c96f2c114", 00:18:53.606 "strip_size_kb": 64, 00:18:53.606 "state": "online", 00:18:53.606 "raid_level": "concat", 00:18:53.606 "superblock": false, 00:18:53.606 
"num_base_bdevs": 4, 00:18:53.606 "num_base_bdevs_discovered": 4, 00:18:53.606 "num_base_bdevs_operational": 4, 00:18:53.606 "base_bdevs_list": [ 00:18:53.606 { 00:18:53.606 "name": "NewBaseBdev", 00:18:53.606 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:53.606 "is_configured": true, 00:18:53.606 "data_offset": 0, 00:18:53.606 "data_size": 65536 00:18:53.606 }, 00:18:53.606 { 00:18:53.606 "name": "BaseBdev2", 00:18:53.606 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:53.606 "is_configured": true, 00:18:53.606 "data_offset": 0, 00:18:53.606 "data_size": 65536 00:18:53.606 }, 00:18:53.606 { 00:18:53.606 "name": "BaseBdev3", 00:18:53.606 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:53.606 "is_configured": true, 00:18:53.606 "data_offset": 0, 00:18:53.606 "data_size": 65536 00:18:53.606 }, 00:18:53.606 { 00:18:53.606 "name": "BaseBdev4", 00:18:53.606 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:53.606 "is_configured": true, 00:18:53.606 "data_offset": 0, 00:18:53.606 "data_size": 65536 00:18:53.606 } 00:18:53.606 ] 00:18:53.606 }' 00:18:53.606 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.606 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:54.168 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:54.169 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:54.425 [2024-07-25 12:00:40.391963] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:54.425 "name": "Existed_Raid", 00:18:54.425 "aliases": [ 00:18:54.425 "3dcdbb59-a631-477d-b5a6-749c96f2c114" 00:18:54.425 ], 00:18:54.425 "product_name": "Raid Volume", 00:18:54.425 "block_size": 512, 00:18:54.425 "num_blocks": 262144, 00:18:54.425 "uuid": "3dcdbb59-a631-477d-b5a6-749c96f2c114", 00:18:54.425 "assigned_rate_limits": { 00:18:54.425 "rw_ios_per_sec": 0, 00:18:54.425 "rw_mbytes_per_sec": 0, 00:18:54.425 "r_mbytes_per_sec": 0, 00:18:54.425 "w_mbytes_per_sec": 0 00:18:54.425 }, 00:18:54.425 "claimed": false, 00:18:54.425 "zoned": false, 00:18:54.425 "supported_io_types": { 00:18:54.425 "read": true, 00:18:54.425 "write": true, 00:18:54.425 "unmap": true, 00:18:54.425 "flush": true, 00:18:54.425 "reset": true, 00:18:54.425 "nvme_admin": false, 00:18:54.425 "nvme_io": false, 00:18:54.425 "nvme_io_md": false, 00:18:54.425 "write_zeroes": true, 00:18:54.425 "zcopy": false, 00:18:54.425 "get_zone_info": false, 00:18:54.425 "zone_management": false, 00:18:54.425 "zone_append": false, 00:18:54.425 "compare": false, 00:18:54.425 
"compare_and_write": false, 00:18:54.425 "abort": false, 00:18:54.425 "seek_hole": false, 00:18:54.425 "seek_data": false, 00:18:54.425 "copy": false, 00:18:54.425 "nvme_iov_md": false 00:18:54.425 }, 00:18:54.425 "memory_domains": [ 00:18:54.425 { 00:18:54.425 "dma_device_id": "system", 00:18:54.425 "dma_device_type": 1 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.425 "dma_device_type": 2 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "system", 00:18:54.425 "dma_device_type": 1 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.425 "dma_device_type": 2 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "system", 00:18:54.425 "dma_device_type": 1 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.425 "dma_device_type": 2 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "system", 00:18:54.425 "dma_device_type": 1 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.425 "dma_device_type": 2 00:18:54.425 } 00:18:54.425 ], 00:18:54.425 "driver_specific": { 00:18:54.425 "raid": { 00:18:54.425 "uuid": "3dcdbb59-a631-477d-b5a6-749c96f2c114", 00:18:54.425 "strip_size_kb": 64, 00:18:54.425 "state": "online", 00:18:54.425 "raid_level": "concat", 00:18:54.425 "superblock": false, 00:18:54.425 "num_base_bdevs": 4, 00:18:54.425 "num_base_bdevs_discovered": 4, 00:18:54.425 "num_base_bdevs_operational": 4, 00:18:54.425 "base_bdevs_list": [ 00:18:54.425 { 00:18:54.425 "name": "NewBaseBdev", 00:18:54.425 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:54.425 "is_configured": true, 00:18:54.425 "data_offset": 0, 00:18:54.425 "data_size": 65536 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "name": "BaseBdev2", 00:18:54.425 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:54.425 "is_configured": true, 00:18:54.425 "data_offset": 0, 00:18:54.425 "data_size": 65536 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "name": "BaseBdev3", 00:18:54.425 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:54.425 "is_configured": true, 00:18:54.425 "data_offset": 0, 00:18:54.425 "data_size": 65536 00:18:54.425 }, 00:18:54.425 { 00:18:54.425 "name": "BaseBdev4", 00:18:54.425 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:54.425 "is_configured": true, 00:18:54.425 "data_offset": 0, 00:18:54.425 "data_size": 65536 00:18:54.425 } 00:18:54.425 ] 00:18:54.425 } 00:18:54.425 } 00:18:54.425 }' 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:54.425 BaseBdev2 00:18:54.425 BaseBdev3 00:18:54.425 BaseBdev4' 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:54.425 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.683 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.683 "name": "NewBaseBdev", 00:18:54.683 "aliases": [ 00:18:54.683 "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02" 00:18:54.683 ], 00:18:54.683 
"product_name": "Malloc disk", 00:18:54.683 "block_size": 512, 00:18:54.683 "num_blocks": 65536, 00:18:54.683 "uuid": "6cc9aa7f-72f6-4fe2-b6c4-2824108c5c02", 00:18:54.683 "assigned_rate_limits": { 00:18:54.683 "rw_ios_per_sec": 0, 00:18:54.683 "rw_mbytes_per_sec": 0, 00:18:54.683 "r_mbytes_per_sec": 0, 00:18:54.683 "w_mbytes_per_sec": 0 00:18:54.683 }, 00:18:54.683 "claimed": true, 00:18:54.683 "claim_type": "exclusive_write", 00:18:54.683 "zoned": false, 00:18:54.683 "supported_io_types": { 00:18:54.683 "read": true, 00:18:54.683 "write": true, 00:18:54.683 "unmap": true, 00:18:54.683 "flush": true, 00:18:54.683 "reset": true, 00:18:54.683 "nvme_admin": false, 00:18:54.683 "nvme_io": false, 00:18:54.683 "nvme_io_md": false, 00:18:54.683 "write_zeroes": true, 00:18:54.683 "zcopy": true, 00:18:54.683 "get_zone_info": false, 00:18:54.683 "zone_management": false, 00:18:54.683 "zone_append": false, 00:18:54.683 "compare": false, 00:18:54.683 "compare_and_write": false, 00:18:54.683 "abort": true, 00:18:54.683 "seek_hole": false, 00:18:54.683 "seek_data": false, 00:18:54.683 "copy": true, 00:18:54.683 "nvme_iov_md": false 00:18:54.683 }, 00:18:54.683 "memory_domains": [ 00:18:54.683 { 00:18:54.683 "dma_device_id": "system", 00:18:54.683 "dma_device_type": 1 00:18:54.683 }, 00:18:54.683 { 00:18:54.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.683 "dma_device_type": 2 00:18:54.683 } 00:18:54.683 ], 00:18:54.683 "driver_specific": {} 00:18:54.683 }' 00:18:54.683 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.683 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.683 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.683 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.940 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.940 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.940 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.940 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.940 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:55.196 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.196 "name": "BaseBdev2", 00:18:55.196 "aliases": [ 00:18:55.196 "d29cbc25-521f-4035-a9d2-3f2e7f4912d5" 00:18:55.196 ], 00:18:55.196 "product_name": "Malloc disk", 00:18:55.196 "block_size": 512, 00:18:55.196 "num_blocks": 65536, 00:18:55.196 "uuid": "d29cbc25-521f-4035-a9d2-3f2e7f4912d5", 00:18:55.196 
"assigned_rate_limits": { 00:18:55.196 "rw_ios_per_sec": 0, 00:18:55.196 "rw_mbytes_per_sec": 0, 00:18:55.196 "r_mbytes_per_sec": 0, 00:18:55.196 "w_mbytes_per_sec": 0 00:18:55.196 }, 00:18:55.196 "claimed": true, 00:18:55.196 "claim_type": "exclusive_write", 00:18:55.196 "zoned": false, 00:18:55.196 "supported_io_types": { 00:18:55.196 "read": true, 00:18:55.196 "write": true, 00:18:55.196 "unmap": true, 00:18:55.196 "flush": true, 00:18:55.196 "reset": true, 00:18:55.196 "nvme_admin": false, 00:18:55.196 "nvme_io": false, 00:18:55.196 "nvme_io_md": false, 00:18:55.196 "write_zeroes": true, 00:18:55.196 "zcopy": true, 00:18:55.196 "get_zone_info": false, 00:18:55.196 "zone_management": false, 00:18:55.196 "zone_append": false, 00:18:55.196 "compare": false, 00:18:55.196 "compare_and_write": false, 00:18:55.196 "abort": true, 00:18:55.196 "seek_hole": false, 00:18:55.196 "seek_data": false, 00:18:55.196 "copy": true, 00:18:55.196 "nvme_iov_md": false 00:18:55.196 }, 00:18:55.196 "memory_domains": [ 00:18:55.196 { 00:18:55.196 "dma_device_id": "system", 00:18:55.196 "dma_device_type": 1 00:18:55.196 }, 00:18:55.196 { 00:18:55.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.196 "dma_device_type": 2 00:18:55.196 } 00:18:55.196 ], 00:18:55.196 "driver_specific": {} 00:18:55.196 }' 00:18:55.196 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.197 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.454 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.710 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.710 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:55.710 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:55.710 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:55.710 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.710 "name": "BaseBdev3", 00:18:55.710 "aliases": [ 00:18:55.710 "5f6ea788-8edd-4afd-a705-dfee5286d1a8" 00:18:55.711 ], 00:18:55.711 "product_name": "Malloc disk", 00:18:55.711 "block_size": 512, 00:18:55.711 "num_blocks": 65536, 00:18:55.711 "uuid": "5f6ea788-8edd-4afd-a705-dfee5286d1a8", 00:18:55.711 "assigned_rate_limits": { 00:18:55.711 "rw_ios_per_sec": 0, 00:18:55.711 "rw_mbytes_per_sec": 0, 00:18:55.711 "r_mbytes_per_sec": 0, 00:18:55.711 "w_mbytes_per_sec": 0 00:18:55.711 
}, 00:18:55.711 "claimed": true, 00:18:55.711 "claim_type": "exclusive_write", 00:18:55.711 "zoned": false, 00:18:55.711 "supported_io_types": { 00:18:55.711 "read": true, 00:18:55.711 "write": true, 00:18:55.711 "unmap": true, 00:18:55.711 "flush": true, 00:18:55.711 "reset": true, 00:18:55.711 "nvme_admin": false, 00:18:55.711 "nvme_io": false, 00:18:55.711 "nvme_io_md": false, 00:18:55.711 "write_zeroes": true, 00:18:55.711 "zcopy": true, 00:18:55.711 "get_zone_info": false, 00:18:55.711 "zone_management": false, 00:18:55.711 "zone_append": false, 00:18:55.711 "compare": false, 00:18:55.711 "compare_and_write": false, 00:18:55.711 "abort": true, 00:18:55.711 "seek_hole": false, 00:18:55.711 "seek_data": false, 00:18:55.711 "copy": true, 00:18:55.711 "nvme_iov_md": false 00:18:55.711 }, 00:18:55.711 "memory_domains": [ 00:18:55.711 { 00:18:55.711 "dma_device_id": "system", 00:18:55.711 "dma_device_type": 1 00:18:55.711 }, 00:18:55.711 { 00:18:55.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.711 "dma_device_type": 2 00:18:55.711 } 00:18:55.711 ], 00:18:55.711 "driver_specific": {} 00:18:55.711 }' 00:18:55.711 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.967 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.967 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.967 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.967 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.224 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.224 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.224 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.224 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:56.224 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:56.481 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:56.481 "name": "BaseBdev4", 00:18:56.481 "aliases": [ 00:18:56.481 "008d249a-266a-4434-9f2e-803562311153" 00:18:56.481 ], 00:18:56.481 "product_name": "Malloc disk", 00:18:56.481 "block_size": 512, 00:18:56.481 "num_blocks": 65536, 00:18:56.481 "uuid": "008d249a-266a-4434-9f2e-803562311153", 00:18:56.481 "assigned_rate_limits": { 00:18:56.481 "rw_ios_per_sec": 0, 00:18:56.481 "rw_mbytes_per_sec": 0, 00:18:56.482 "r_mbytes_per_sec": 0, 00:18:56.482 "w_mbytes_per_sec": 0 00:18:56.482 }, 00:18:56.482 "claimed": true, 00:18:56.482 "claim_type": "exclusive_write", 00:18:56.482 "zoned": false, 00:18:56.482 "supported_io_types": { 00:18:56.482 "read": true, 
00:18:56.482 "write": true, 00:18:56.482 "unmap": true, 00:18:56.482 "flush": true, 00:18:56.482 "reset": true, 00:18:56.482 "nvme_admin": false, 00:18:56.482 "nvme_io": false, 00:18:56.482 "nvme_io_md": false, 00:18:56.482 "write_zeroes": true, 00:18:56.482 "zcopy": true, 00:18:56.482 "get_zone_info": false, 00:18:56.482 "zone_management": false, 00:18:56.482 "zone_append": false, 00:18:56.482 "compare": false, 00:18:56.482 "compare_and_write": false, 00:18:56.482 "abort": true, 00:18:56.482 "seek_hole": false, 00:18:56.482 "seek_data": false, 00:18:56.482 "copy": true, 00:18:56.482 "nvme_iov_md": false 00:18:56.482 }, 00:18:56.482 "memory_domains": [ 00:18:56.482 { 00:18:56.482 "dma_device_id": "system", 00:18:56.482 "dma_device_type": 1 00:18:56.482 }, 00:18:56.482 { 00:18:56.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.482 "dma_device_type": 2 00:18:56.482 } 00:18:56.482 ], 00:18:56.482 "driver_specific": {} 00:18:56.482 }' 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:56.482 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.737 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:56.994 [2024-07-25 12:00:42.946399] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.994 [2024-07-25 12:00:42.946424] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.994 [2024-07-25 12:00:42.946476] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.994 [2024-07-25 12:00:42.946530] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.994 [2024-07-25 12:00:42.946541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17526f0 name Existed_Raid, state offline 00:18:56.994 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 4183481 00:18:56.994 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 4183481 ']' 00:18:56.994 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 4183481 00:18:56.994 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:56.994 12:00:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.994 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4183481 00:18:56.994 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.994 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.994 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4183481' 00:18:56.994 killing process with pid 4183481 00:18:56.994 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 4183481 00:18:56.994 [2024-07-25 12:00:43.027002] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.994 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 4183481 00:18:56.994 [2024-07-25 12:00:43.058606] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.251 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:57.251 00:18:57.251 real 0m30.483s 00:18:57.251 user 0m55.897s 00:18:57.251 sys 0m5.607s 00:18:57.251 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.251 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.251 ************************************ 00:18:57.251 END TEST raid_state_function_test 00:18:57.251 ************************************ 00:18:57.252 12:00:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:57.252 12:00:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:57.252 12:00:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.252 12:00:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.252 ************************************ 00:18:57.252 START TEST raid_state_function_test_sb 00:18:57.252 ************************************ 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=4189236 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 4189236' 00:18:57.252 Process raid pid: 4189236 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 4189236 /var/tmp/spdk-raid.sock 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 4189236 ']' 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
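For context, a minimal sketch (not captured output) of the RPC sequence this superblock variant drives once bdev_svc is listening on /var/tmp/spdk-raid.sock; the rpc.py subcommands, sizes, and jq filters are the ones recorded elsewhere in this log, while the RPC shell variable and the loop are illustrative shorthand:

#!/usr/bin/env bash
# Assumes bdev_svc is already up and serving RPCs on the raid test socket shown above.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the four malloc base bdevs the test uses (32 MB, 512-byte blocks,
# i.e. the 65536-block "Malloc disk" bdevs dumped in this log).
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Assemble a 4-way concat raid with a 64 KiB strip size and an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Verify the raid state and which base bdevs ended up configured, as the test's
# verify_raid_bdev_state/verify_raid_bdev_properties helpers do.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
$RPC bdev_get_bdevs -b Existed_Raid | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'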
00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.252 12:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.510 [2024-07-25 12:00:43.382524] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:18:57.510 [2024-07-25 12:00:43.382578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:57.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.510 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 
0000:3f:01.4 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:57.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.511 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:57.511 [2024-07-25 12:00:43.514323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.511 [2024-07-25 12:00:43.600689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.769 [2024-07-25 12:00:43.660163] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.769 [2024-07-25 12:00:43.660218] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.332 12:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.332 12:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:58.332 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:58.589 [2024-07-25 12:00:44.502452] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.589 [2024-07-25 12:00:44.502487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.589 [2024-07-25 12:00:44.502497] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.589 [2024-07-25 12:00:44.502508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.589 [2024-07-25 12:00:44.502516] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.589 [2024-07-25 12:00:44.502526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.589 [2024-07-25 12:00:44.502534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:58.589 [2024-07-25 12:00:44.502544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.589 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.847 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.847 "name": "Existed_Raid", 00:18:58.847 "uuid": "0434c04d-0988-473e-be66-0b542c25a21a", 00:18:58.847 "strip_size_kb": 64, 00:18:58.847 "state": "configuring", 00:18:58.847 "raid_level": "concat", 00:18:58.847 "superblock": true, 00:18:58.847 "num_base_bdevs": 4, 00:18:58.847 "num_base_bdevs_discovered": 0, 00:18:58.847 "num_base_bdevs_operational": 4, 00:18:58.847 "base_bdevs_list": [ 00:18:58.847 { 00:18:58.847 "name": "BaseBdev1", 00:18:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.847 "is_configured": false, 00:18:58.847 "data_offset": 0, 00:18:58.847 "data_size": 0 00:18:58.847 }, 00:18:58.847 { 00:18:58.847 "name": "BaseBdev2", 00:18:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.847 "is_configured": false, 00:18:58.847 "data_offset": 0, 00:18:58.847 "data_size": 0 00:18:58.847 }, 00:18:58.847 { 00:18:58.847 "name": "BaseBdev3", 00:18:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.847 "is_configured": false, 00:18:58.847 "data_offset": 0, 00:18:58.847 "data_size": 0 00:18:58.847 }, 00:18:58.847 { 00:18:58.847 "name": "BaseBdev4", 00:18:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.847 "is_configured": false, 00:18:58.847 "data_offset": 0, 00:18:58.847 "data_size": 0 00:18:58.847 } 00:18:58.847 ] 00:18:58.847 }' 00:18:58.847 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.847 12:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.777 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.777 [2024-07-25 12:00:45.801699] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.777 [2024-07-25 12:00:45.801728] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1464f60 name Existed_Raid, state configuring 00:18:59.777 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.033 [2024-07-25 12:00:46.030329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.033 [2024-07-25 12:00:46.030355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.033 [2024-07-25 12:00:46.030365] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.033 [2024-07-25 12:00:46.030375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.033 [2024-07-25 12:00:46.030383] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.033 [2024-07-25 12:00:46.030393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.033 [2024-07-25 12:00:46.030401] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.033 [2024-07-25 12:00:46.030411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.033 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.290 [2024-07-25 12:00:46.220300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.290 BaseBdev1 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:00.290 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.546 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.802 [ 00:19:00.802 { 00:19:00.802 "name": "BaseBdev1", 00:19:00.802 "aliases": [ 00:19:00.802 "b9fd503d-272c-4de8-a1ed-3221b5af074f" 00:19:00.802 ], 00:19:00.802 "product_name": "Malloc disk", 00:19:00.802 "block_size": 512, 00:19:00.802 "num_blocks": 65536, 00:19:00.802 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:00.802 "assigned_rate_limits": { 00:19:00.802 "rw_ios_per_sec": 0, 00:19:00.802 "rw_mbytes_per_sec": 0, 00:19:00.802 "r_mbytes_per_sec": 0, 00:19:00.802 "w_mbytes_per_sec": 0 00:19:00.802 }, 00:19:00.802 "claimed": true, 00:19:00.802 "claim_type": "exclusive_write", 00:19:00.802 "zoned": false, 00:19:00.802 
"supported_io_types": { 00:19:00.802 "read": true, 00:19:00.802 "write": true, 00:19:00.802 "unmap": true, 00:19:00.802 "flush": true, 00:19:00.802 "reset": true, 00:19:00.802 "nvme_admin": false, 00:19:00.802 "nvme_io": false, 00:19:00.802 "nvme_io_md": false, 00:19:00.802 "write_zeroes": true, 00:19:00.802 "zcopy": true, 00:19:00.802 "get_zone_info": false, 00:19:00.802 "zone_management": false, 00:19:00.802 "zone_append": false, 00:19:00.802 "compare": false, 00:19:00.802 "compare_and_write": false, 00:19:00.802 "abort": true, 00:19:00.802 "seek_hole": false, 00:19:00.802 "seek_data": false, 00:19:00.802 "copy": true, 00:19:00.802 "nvme_iov_md": false 00:19:00.802 }, 00:19:00.802 "memory_domains": [ 00:19:00.802 { 00:19:00.802 "dma_device_id": "system", 00:19:00.802 "dma_device_type": 1 00:19:00.802 }, 00:19:00.802 { 00:19:00.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.802 "dma_device_type": 2 00:19:00.802 } 00:19:00.802 ], 00:19:00.802 "driver_specific": {} 00:19:00.802 } 00:19:00.802 ] 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.802 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.059 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.059 "name": "Existed_Raid", 00:19:01.059 "uuid": "442090dc-a29f-43e3-98e0-0516b9e50b75", 00:19:01.059 "strip_size_kb": 64, 00:19:01.059 "state": "configuring", 00:19:01.059 "raid_level": "concat", 00:19:01.059 "superblock": true, 00:19:01.059 "num_base_bdevs": 4, 00:19:01.059 "num_base_bdevs_discovered": 1, 00:19:01.059 "num_base_bdevs_operational": 4, 00:19:01.059 "base_bdevs_list": [ 00:19:01.059 { 00:19:01.059 "name": "BaseBdev1", 00:19:01.059 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:01.059 "is_configured": true, 00:19:01.059 "data_offset": 2048, 00:19:01.059 "data_size": 63488 00:19:01.059 }, 00:19:01.059 { 00:19:01.059 "name": "BaseBdev2", 00:19:01.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.059 "is_configured": false, 00:19:01.059 
"data_offset": 0, 00:19:01.059 "data_size": 0 00:19:01.059 }, 00:19:01.059 { 00:19:01.059 "name": "BaseBdev3", 00:19:01.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.059 "is_configured": false, 00:19:01.059 "data_offset": 0, 00:19:01.059 "data_size": 0 00:19:01.059 }, 00:19:01.059 { 00:19:01.059 "name": "BaseBdev4", 00:19:01.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.059 "is_configured": false, 00:19:01.059 "data_offset": 0, 00:19:01.059 "data_size": 0 00:19:01.059 } 00:19:01.059 ] 00:19:01.059 }' 00:19:01.059 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.059 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.623 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.623 [2024-07-25 12:00:47.700195] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.623 [2024-07-25 12:00:47.700229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14647d0 name Existed_Raid, state configuring 00:19:01.623 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.881 [2024-07-25 12:00:47.876709] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.881 [2024-07-25 12:00:47.878091] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.881 [2024-07-25 12:00:47.878122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.881 [2024-07-25 12:00:47.878132] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.881 [2024-07-25 12:00:47.878151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.881 [2024-07-25 12:00:47.878159] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.881 [2024-07-25 12:00:47.878169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.881 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.147 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.147 "name": "Existed_Raid", 00:19:02.147 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:02.147 "strip_size_kb": 64, 00:19:02.147 "state": "configuring", 00:19:02.147 "raid_level": "concat", 00:19:02.147 "superblock": true, 00:19:02.147 "num_base_bdevs": 4, 00:19:02.147 "num_base_bdevs_discovered": 1, 00:19:02.147 "num_base_bdevs_operational": 4, 00:19:02.147 "base_bdevs_list": [ 00:19:02.147 { 00:19:02.147 "name": "BaseBdev1", 00:19:02.147 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:02.147 "is_configured": true, 00:19:02.147 "data_offset": 2048, 00:19:02.147 "data_size": 63488 00:19:02.147 }, 00:19:02.147 { 00:19:02.147 "name": "BaseBdev2", 00:19:02.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.147 "is_configured": false, 00:19:02.147 "data_offset": 0, 00:19:02.147 "data_size": 0 00:19:02.147 }, 00:19:02.147 { 00:19:02.147 "name": "BaseBdev3", 00:19:02.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.147 "is_configured": false, 00:19:02.147 "data_offset": 0, 00:19:02.147 "data_size": 0 00:19:02.147 }, 00:19:02.147 { 00:19:02.147 "name": "BaseBdev4", 00:19:02.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.147 "is_configured": false, 00:19:02.147 "data_offset": 0, 00:19:02.147 "data_size": 0 00:19:02.147 } 00:19:02.147 ] 00:19:02.147 }' 00:19:02.147 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.147 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.716 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.971 [2024-07-25 12:00:48.842363] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.971 BaseBdev2 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:02.971 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.971 12:00:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:03.228 [ 00:19:03.228 { 00:19:03.228 "name": "BaseBdev2", 00:19:03.228 "aliases": [ 00:19:03.228 "f8fae47e-7ada-40c7-a89f-7832425a6281" 00:19:03.228 ], 00:19:03.228 "product_name": "Malloc disk", 00:19:03.228 "block_size": 512, 00:19:03.228 "num_blocks": 65536, 00:19:03.228 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:03.228 "assigned_rate_limits": { 00:19:03.228 "rw_ios_per_sec": 0, 00:19:03.228 "rw_mbytes_per_sec": 0, 00:19:03.228 "r_mbytes_per_sec": 0, 00:19:03.228 "w_mbytes_per_sec": 0 00:19:03.228 }, 00:19:03.228 "claimed": true, 00:19:03.228 "claim_type": "exclusive_write", 00:19:03.228 "zoned": false, 00:19:03.228 "supported_io_types": { 00:19:03.228 "read": true, 00:19:03.228 "write": true, 00:19:03.228 "unmap": true, 00:19:03.228 "flush": true, 00:19:03.228 "reset": true, 00:19:03.228 "nvme_admin": false, 00:19:03.228 "nvme_io": false, 00:19:03.228 "nvme_io_md": false, 00:19:03.228 "write_zeroes": true, 00:19:03.228 "zcopy": true, 00:19:03.228 "get_zone_info": false, 00:19:03.228 "zone_management": false, 00:19:03.228 "zone_append": false, 00:19:03.228 "compare": false, 00:19:03.228 "compare_and_write": false, 00:19:03.228 "abort": true, 00:19:03.228 "seek_hole": false, 00:19:03.228 "seek_data": false, 00:19:03.228 "copy": true, 00:19:03.228 "nvme_iov_md": false 00:19:03.228 }, 00:19:03.228 "memory_domains": [ 00:19:03.228 { 00:19:03.228 "dma_device_id": "system", 00:19:03.228 "dma_device_type": 1 00:19:03.228 }, 00:19:03.228 { 00:19:03.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.228 "dma_device_type": 2 00:19:03.228 } 00:19:03.228 ], 00:19:03.228 "driver_specific": {} 00:19:03.228 } 00:19:03.228 ] 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:03.228 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.485 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.485 "name": "Existed_Raid", 00:19:03.485 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:03.485 "strip_size_kb": 64, 00:19:03.485 "state": "configuring", 00:19:03.485 "raid_level": "concat", 00:19:03.485 "superblock": true, 00:19:03.485 "num_base_bdevs": 4, 00:19:03.485 "num_base_bdevs_discovered": 2, 00:19:03.485 "num_base_bdevs_operational": 4, 00:19:03.485 "base_bdevs_list": [ 00:19:03.485 { 00:19:03.485 "name": "BaseBdev1", 00:19:03.485 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:03.485 "is_configured": true, 00:19:03.485 "data_offset": 2048, 00:19:03.485 "data_size": 63488 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "name": "BaseBdev2", 00:19:03.485 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:03.485 "is_configured": true, 00:19:03.485 "data_offset": 2048, 00:19:03.485 "data_size": 63488 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "name": "BaseBdev3", 00:19:03.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.485 "is_configured": false, 00:19:03.485 "data_offset": 0, 00:19:03.485 "data_size": 0 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "name": "BaseBdev4", 00:19:03.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.485 "is_configured": false, 00:19:03.485 "data_offset": 0, 00:19:03.485 "data_size": 0 00:19:03.485 } 00:19:03.485 ] 00:19:03.485 }' 00:19:03.485 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.485 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.113 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:04.113 [2024-07-25 12:00:50.156961] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.113 BaseBdev3 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:04.113 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.370 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:04.626 [ 00:19:04.626 { 00:19:04.626 "name": "BaseBdev3", 00:19:04.626 "aliases": [ 00:19:04.626 "9fc6b7d2-5c39-4a48-830c-8e677e9ed658" 00:19:04.626 ], 00:19:04.626 "product_name": "Malloc disk", 00:19:04.626 "block_size": 512, 00:19:04.626 "num_blocks": 
65536, 00:19:04.626 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:04.626 "assigned_rate_limits": { 00:19:04.626 "rw_ios_per_sec": 0, 00:19:04.626 "rw_mbytes_per_sec": 0, 00:19:04.626 "r_mbytes_per_sec": 0, 00:19:04.626 "w_mbytes_per_sec": 0 00:19:04.626 }, 00:19:04.626 "claimed": true, 00:19:04.626 "claim_type": "exclusive_write", 00:19:04.626 "zoned": false, 00:19:04.626 "supported_io_types": { 00:19:04.626 "read": true, 00:19:04.626 "write": true, 00:19:04.626 "unmap": true, 00:19:04.626 "flush": true, 00:19:04.626 "reset": true, 00:19:04.626 "nvme_admin": false, 00:19:04.626 "nvme_io": false, 00:19:04.626 "nvme_io_md": false, 00:19:04.626 "write_zeroes": true, 00:19:04.626 "zcopy": true, 00:19:04.626 "get_zone_info": false, 00:19:04.626 "zone_management": false, 00:19:04.626 "zone_append": false, 00:19:04.626 "compare": false, 00:19:04.626 "compare_and_write": false, 00:19:04.626 "abort": true, 00:19:04.626 "seek_hole": false, 00:19:04.626 "seek_data": false, 00:19:04.626 "copy": true, 00:19:04.626 "nvme_iov_md": false 00:19:04.626 }, 00:19:04.626 "memory_domains": [ 00:19:04.626 { 00:19:04.626 "dma_device_id": "system", 00:19:04.626 "dma_device_type": 1 00:19:04.626 }, 00:19:04.626 { 00:19:04.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.626 "dma_device_type": 2 00:19:04.626 } 00:19:04.626 ], 00:19:04.626 "driver_specific": {} 00:19:04.626 } 00:19:04.626 ] 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.626 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.883 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.883 "name": "Existed_Raid", 00:19:04.883 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:04.883 "strip_size_kb": 64, 00:19:04.883 "state": "configuring", 00:19:04.883 
"raid_level": "concat", 00:19:04.883 "superblock": true, 00:19:04.883 "num_base_bdevs": 4, 00:19:04.883 "num_base_bdevs_discovered": 3, 00:19:04.883 "num_base_bdevs_operational": 4, 00:19:04.883 "base_bdevs_list": [ 00:19:04.883 { 00:19:04.883 "name": "BaseBdev1", 00:19:04.883 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:04.883 "is_configured": true, 00:19:04.883 "data_offset": 2048, 00:19:04.883 "data_size": 63488 00:19:04.883 }, 00:19:04.883 { 00:19:04.883 "name": "BaseBdev2", 00:19:04.883 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:04.883 "is_configured": true, 00:19:04.883 "data_offset": 2048, 00:19:04.883 "data_size": 63488 00:19:04.883 }, 00:19:04.883 { 00:19:04.883 "name": "BaseBdev3", 00:19:04.883 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:04.883 "is_configured": true, 00:19:04.883 "data_offset": 2048, 00:19:04.883 "data_size": 63488 00:19:04.883 }, 00:19:04.883 { 00:19:04.883 "name": "BaseBdev4", 00:19:04.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.883 "is_configured": false, 00:19:04.883 "data_offset": 0, 00:19:04.883 "data_size": 0 00:19:04.883 } 00:19:04.883 ] 00:19:04.883 }' 00:19:04.883 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.883 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:05.813 [2024-07-25 12:00:51.788319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:05.813 [2024-07-25 12:00:51.788467] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1465830 00:19:05.813 [2024-07-25 12:00:51.788481] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:05.813 [2024-07-25 12:00:51.788651] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x145c1e0 00:19:05.813 [2024-07-25 12:00:51.788762] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1465830 00:19:05.813 [2024-07-25 12:00:51.788771] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1465830 00:19:05.813 [2024-07-25 12:00:51.788858] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.813 BaseBdev4 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:05.813 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.121 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:06.121 [ 00:19:06.121 { 00:19:06.121 "name": "BaseBdev4", 00:19:06.121 "aliases": [ 00:19:06.121 "ac17994d-6a37-4853-9bf2-a460d7c4e1cd" 00:19:06.121 ], 00:19:06.121 "product_name": "Malloc disk", 00:19:06.121 "block_size": 512, 00:19:06.121 "num_blocks": 65536, 00:19:06.121 "uuid": "ac17994d-6a37-4853-9bf2-a460d7c4e1cd", 00:19:06.121 "assigned_rate_limits": { 00:19:06.121 "rw_ios_per_sec": 0, 00:19:06.121 "rw_mbytes_per_sec": 0, 00:19:06.121 "r_mbytes_per_sec": 0, 00:19:06.121 "w_mbytes_per_sec": 0 00:19:06.121 }, 00:19:06.121 "claimed": true, 00:19:06.121 "claim_type": "exclusive_write", 00:19:06.121 "zoned": false, 00:19:06.121 "supported_io_types": { 00:19:06.121 "read": true, 00:19:06.121 "write": true, 00:19:06.121 "unmap": true, 00:19:06.121 "flush": true, 00:19:06.121 "reset": true, 00:19:06.121 "nvme_admin": false, 00:19:06.121 "nvme_io": false, 00:19:06.121 "nvme_io_md": false, 00:19:06.121 "write_zeroes": true, 00:19:06.121 "zcopy": true, 00:19:06.121 "get_zone_info": false, 00:19:06.121 "zone_management": false, 00:19:06.121 "zone_append": false, 00:19:06.121 "compare": false, 00:19:06.121 "compare_and_write": false, 00:19:06.121 "abort": true, 00:19:06.121 "seek_hole": false, 00:19:06.121 "seek_data": false, 00:19:06.121 "copy": true, 00:19:06.121 "nvme_iov_md": false 00:19:06.121 }, 00:19:06.121 "memory_domains": [ 00:19:06.121 { 00:19:06.121 "dma_device_id": "system", 00:19:06.121 "dma_device_type": 1 00:19:06.121 }, 00:19:06.121 { 00:19:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.121 "dma_device_type": 2 00:19:06.121 } 00:19:06.121 ], 00:19:06.121 "driver_specific": {} 00:19:06.121 } 00:19:06.121 ] 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.121 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:06.377 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.377 "name": "Existed_Raid", 00:19:06.377 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:06.377 "strip_size_kb": 64, 00:19:06.377 "state": "online", 00:19:06.377 "raid_level": "concat", 00:19:06.377 "superblock": true, 00:19:06.377 "num_base_bdevs": 4, 00:19:06.377 "num_base_bdevs_discovered": 4, 00:19:06.377 "num_base_bdevs_operational": 4, 00:19:06.377 "base_bdevs_list": [ 00:19:06.377 { 00:19:06.377 "name": "BaseBdev1", 00:19:06.377 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:06.377 "is_configured": true, 00:19:06.377 "data_offset": 2048, 00:19:06.377 "data_size": 63488 00:19:06.377 }, 00:19:06.377 { 00:19:06.377 "name": "BaseBdev2", 00:19:06.377 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:06.377 "is_configured": true, 00:19:06.377 "data_offset": 2048, 00:19:06.377 "data_size": 63488 00:19:06.377 }, 00:19:06.377 { 00:19:06.377 "name": "BaseBdev3", 00:19:06.377 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:06.377 "is_configured": true, 00:19:06.377 "data_offset": 2048, 00:19:06.377 "data_size": 63488 00:19:06.377 }, 00:19:06.377 { 00:19:06.377 "name": "BaseBdev4", 00:19:06.377 "uuid": "ac17994d-6a37-4853-9bf2-a460d7c4e1cd", 00:19:06.377 "is_configured": true, 00:19:06.377 "data_offset": 2048, 00:19:06.378 "data_size": 63488 00:19:06.378 } 00:19:06.378 ] 00:19:06.378 }' 00:19:06.378 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.378 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:06.941 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:07.198 [2024-07-25 12:00:53.092117] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.198 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:07.198 "name": "Existed_Raid", 00:19:07.198 "aliases": [ 00:19:07.198 "5518969f-e449-4baa-aaad-25e14deac336" 00:19:07.198 ], 00:19:07.198 "product_name": "Raid Volume", 00:19:07.198 "block_size": 512, 00:19:07.198 "num_blocks": 253952, 00:19:07.198 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:07.198 "assigned_rate_limits": { 00:19:07.198 "rw_ios_per_sec": 0, 00:19:07.198 "rw_mbytes_per_sec": 0, 00:19:07.198 "r_mbytes_per_sec": 0, 00:19:07.198 "w_mbytes_per_sec": 0 00:19:07.198 }, 00:19:07.198 "claimed": false, 00:19:07.198 "zoned": false, 00:19:07.198 "supported_io_types": { 00:19:07.198 "read": true, 00:19:07.198 "write": true, 
00:19:07.198 "unmap": true, 00:19:07.198 "flush": true, 00:19:07.198 "reset": true, 00:19:07.198 "nvme_admin": false, 00:19:07.198 "nvme_io": false, 00:19:07.198 "nvme_io_md": false, 00:19:07.198 "write_zeroes": true, 00:19:07.198 "zcopy": false, 00:19:07.198 "get_zone_info": false, 00:19:07.198 "zone_management": false, 00:19:07.198 "zone_append": false, 00:19:07.198 "compare": false, 00:19:07.198 "compare_and_write": false, 00:19:07.198 "abort": false, 00:19:07.198 "seek_hole": false, 00:19:07.198 "seek_data": false, 00:19:07.198 "copy": false, 00:19:07.198 "nvme_iov_md": false 00:19:07.198 }, 00:19:07.198 "memory_domains": [ 00:19:07.198 { 00:19:07.198 "dma_device_id": "system", 00:19:07.198 "dma_device_type": 1 00:19:07.198 }, 00:19:07.198 { 00:19:07.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.198 "dma_device_type": 2 00:19:07.198 }, 00:19:07.198 { 00:19:07.198 "dma_device_id": "system", 00:19:07.198 "dma_device_type": 1 00:19:07.198 }, 00:19:07.198 { 00:19:07.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.198 "dma_device_type": 2 00:19:07.198 }, 00:19:07.198 { 00:19:07.198 "dma_device_id": "system", 00:19:07.198 "dma_device_type": 1 00:19:07.198 }, 00:19:07.198 { 00:19:07.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.198 "dma_device_type": 2 00:19:07.199 }, 00:19:07.199 { 00:19:07.199 "dma_device_id": "system", 00:19:07.199 "dma_device_type": 1 00:19:07.199 }, 00:19:07.199 { 00:19:07.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.199 "dma_device_type": 2 00:19:07.199 } 00:19:07.199 ], 00:19:07.199 "driver_specific": { 00:19:07.199 "raid": { 00:19:07.199 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:07.199 "strip_size_kb": 64, 00:19:07.199 "state": "online", 00:19:07.199 "raid_level": "concat", 00:19:07.199 "superblock": true, 00:19:07.199 "num_base_bdevs": 4, 00:19:07.199 "num_base_bdevs_discovered": 4, 00:19:07.199 "num_base_bdevs_operational": 4, 00:19:07.199 "base_bdevs_list": [ 00:19:07.199 { 00:19:07.199 "name": "BaseBdev1", 00:19:07.199 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:07.199 "is_configured": true, 00:19:07.199 "data_offset": 2048, 00:19:07.199 "data_size": 63488 00:19:07.199 }, 00:19:07.199 { 00:19:07.199 "name": "BaseBdev2", 00:19:07.199 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:07.199 "is_configured": true, 00:19:07.199 "data_offset": 2048, 00:19:07.199 "data_size": 63488 00:19:07.199 }, 00:19:07.199 { 00:19:07.199 "name": "BaseBdev3", 00:19:07.199 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:07.199 "is_configured": true, 00:19:07.199 "data_offset": 2048, 00:19:07.199 "data_size": 63488 00:19:07.199 }, 00:19:07.199 { 00:19:07.199 "name": "BaseBdev4", 00:19:07.199 "uuid": "ac17994d-6a37-4853-9bf2-a460d7c4e1cd", 00:19:07.199 "is_configured": true, 00:19:07.199 "data_offset": 2048, 00:19:07.199 "data_size": 63488 00:19:07.199 } 00:19:07.199 ] 00:19:07.199 } 00:19:07.199 } 00:19:07.199 }' 00:19:07.199 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.199 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:07.199 BaseBdev2 00:19:07.199 BaseBdev3 00:19:07.199 BaseBdev4' 00:19:07.199 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.199 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:07.199 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.456 "name": "BaseBdev1", 00:19:07.456 "aliases": [ 00:19:07.456 "b9fd503d-272c-4de8-a1ed-3221b5af074f" 00:19:07.456 ], 00:19:07.456 "product_name": "Malloc disk", 00:19:07.456 "block_size": 512, 00:19:07.456 "num_blocks": 65536, 00:19:07.456 "uuid": "b9fd503d-272c-4de8-a1ed-3221b5af074f", 00:19:07.456 "assigned_rate_limits": { 00:19:07.456 "rw_ios_per_sec": 0, 00:19:07.456 "rw_mbytes_per_sec": 0, 00:19:07.456 "r_mbytes_per_sec": 0, 00:19:07.456 "w_mbytes_per_sec": 0 00:19:07.456 }, 00:19:07.456 "claimed": true, 00:19:07.456 "claim_type": "exclusive_write", 00:19:07.456 "zoned": false, 00:19:07.456 "supported_io_types": { 00:19:07.456 "read": true, 00:19:07.456 "write": true, 00:19:07.456 "unmap": true, 00:19:07.456 "flush": true, 00:19:07.456 "reset": true, 00:19:07.456 "nvme_admin": false, 00:19:07.456 "nvme_io": false, 00:19:07.456 "nvme_io_md": false, 00:19:07.456 "write_zeroes": true, 00:19:07.456 "zcopy": true, 00:19:07.456 "get_zone_info": false, 00:19:07.456 "zone_management": false, 00:19:07.456 "zone_append": false, 00:19:07.456 "compare": false, 00:19:07.456 "compare_and_write": false, 00:19:07.456 "abort": true, 00:19:07.456 "seek_hole": false, 00:19:07.456 "seek_data": false, 00:19:07.456 "copy": true, 00:19:07.456 "nvme_iov_md": false 00:19:07.456 }, 00:19:07.456 "memory_domains": [ 00:19:07.456 { 00:19:07.456 "dma_device_id": "system", 00:19:07.456 "dma_device_type": 1 00:19:07.456 }, 00:19:07.456 { 00:19:07.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.456 "dma_device_type": 2 00:19:07.456 } 00:19:07.456 ], 00:19:07.456 "driver_specific": {} 00:19:07.456 }' 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.456 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.713 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:07.713 
12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.970 "name": "BaseBdev2", 00:19:07.970 "aliases": [ 00:19:07.970 "f8fae47e-7ada-40c7-a89f-7832425a6281" 00:19:07.970 ], 00:19:07.970 "product_name": "Malloc disk", 00:19:07.970 "block_size": 512, 00:19:07.970 "num_blocks": 65536, 00:19:07.970 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:07.970 "assigned_rate_limits": { 00:19:07.970 "rw_ios_per_sec": 0, 00:19:07.970 "rw_mbytes_per_sec": 0, 00:19:07.970 "r_mbytes_per_sec": 0, 00:19:07.970 "w_mbytes_per_sec": 0 00:19:07.970 }, 00:19:07.970 "claimed": true, 00:19:07.970 "claim_type": "exclusive_write", 00:19:07.970 "zoned": false, 00:19:07.970 "supported_io_types": { 00:19:07.970 "read": true, 00:19:07.970 "write": true, 00:19:07.970 "unmap": true, 00:19:07.970 "flush": true, 00:19:07.970 "reset": true, 00:19:07.970 "nvme_admin": false, 00:19:07.970 "nvme_io": false, 00:19:07.970 "nvme_io_md": false, 00:19:07.970 "write_zeroes": true, 00:19:07.970 "zcopy": true, 00:19:07.970 "get_zone_info": false, 00:19:07.970 "zone_management": false, 00:19:07.970 "zone_append": false, 00:19:07.970 "compare": false, 00:19:07.970 "compare_and_write": false, 00:19:07.970 "abort": true, 00:19:07.970 "seek_hole": false, 00:19:07.970 "seek_data": false, 00:19:07.970 "copy": true, 00:19:07.970 "nvme_iov_md": false 00:19:07.970 }, 00:19:07.970 "memory_domains": [ 00:19:07.970 { 00:19:07.970 "dma_device_id": "system", 00:19:07.970 "dma_device_type": 1 00:19:07.970 }, 00:19:07.970 { 00:19:07.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.970 "dma_device_type": 2 00:19:07.970 } 00:19:07.970 ], 00:19:07.970 "driver_specific": {} 00:19:07.970 }' 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.970 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.970 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.970 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.970 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:08.226 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.482 12:00:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.482 "name": "BaseBdev3", 00:19:08.482 "aliases": [ 00:19:08.482 "9fc6b7d2-5c39-4a48-830c-8e677e9ed658" 00:19:08.482 ], 00:19:08.482 "product_name": "Malloc disk", 00:19:08.482 "block_size": 512, 00:19:08.482 "num_blocks": 65536, 00:19:08.482 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:08.482 "assigned_rate_limits": { 00:19:08.482 "rw_ios_per_sec": 0, 00:19:08.482 "rw_mbytes_per_sec": 0, 00:19:08.482 "r_mbytes_per_sec": 0, 00:19:08.482 "w_mbytes_per_sec": 0 00:19:08.482 }, 00:19:08.482 "claimed": true, 00:19:08.482 "claim_type": "exclusive_write", 00:19:08.482 "zoned": false, 00:19:08.482 "supported_io_types": { 00:19:08.482 "read": true, 00:19:08.482 "write": true, 00:19:08.482 "unmap": true, 00:19:08.482 "flush": true, 00:19:08.482 "reset": true, 00:19:08.482 "nvme_admin": false, 00:19:08.482 "nvme_io": false, 00:19:08.482 "nvme_io_md": false, 00:19:08.482 "write_zeroes": true, 00:19:08.482 "zcopy": true, 00:19:08.482 "get_zone_info": false, 00:19:08.482 "zone_management": false, 00:19:08.482 "zone_append": false, 00:19:08.482 "compare": false, 00:19:08.482 "compare_and_write": false, 00:19:08.482 "abort": true, 00:19:08.482 "seek_hole": false, 00:19:08.482 "seek_data": false, 00:19:08.482 "copy": true, 00:19:08.482 "nvme_iov_md": false 00:19:08.482 }, 00:19:08.482 "memory_domains": [ 00:19:08.482 { 00:19:08.482 "dma_device_id": "system", 00:19:08.482 "dma_device_type": 1 00:19:08.482 }, 00:19:08.482 { 00:19:08.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.482 "dma_device_type": 2 00:19:08.482 } 00:19:08.482 ], 00:19:08.482 "driver_specific": {} 00:19:08.482 }' 00:19:08.482 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.482 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.482 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.482 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.482 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.737 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.738 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.738 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.738 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:08.738 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.994 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.994 "name": "BaseBdev4", 00:19:08.994 
"aliases": [ 00:19:08.994 "ac17994d-6a37-4853-9bf2-a460d7c4e1cd" 00:19:08.994 ], 00:19:08.994 "product_name": "Malloc disk", 00:19:08.994 "block_size": 512, 00:19:08.994 "num_blocks": 65536, 00:19:08.994 "uuid": "ac17994d-6a37-4853-9bf2-a460d7c4e1cd", 00:19:08.994 "assigned_rate_limits": { 00:19:08.994 "rw_ios_per_sec": 0, 00:19:08.994 "rw_mbytes_per_sec": 0, 00:19:08.994 "r_mbytes_per_sec": 0, 00:19:08.994 "w_mbytes_per_sec": 0 00:19:08.994 }, 00:19:08.994 "claimed": true, 00:19:08.994 "claim_type": "exclusive_write", 00:19:08.994 "zoned": false, 00:19:08.994 "supported_io_types": { 00:19:08.994 "read": true, 00:19:08.994 "write": true, 00:19:08.994 "unmap": true, 00:19:08.994 "flush": true, 00:19:08.994 "reset": true, 00:19:08.994 "nvme_admin": false, 00:19:08.994 "nvme_io": false, 00:19:08.994 "nvme_io_md": false, 00:19:08.994 "write_zeroes": true, 00:19:08.994 "zcopy": true, 00:19:08.994 "get_zone_info": false, 00:19:08.994 "zone_management": false, 00:19:08.994 "zone_append": false, 00:19:08.994 "compare": false, 00:19:08.994 "compare_and_write": false, 00:19:08.994 "abort": true, 00:19:08.994 "seek_hole": false, 00:19:08.994 "seek_data": false, 00:19:08.994 "copy": true, 00:19:08.994 "nvme_iov_md": false 00:19:08.994 }, 00:19:08.994 "memory_domains": [ 00:19:08.994 { 00:19:08.994 "dma_device_id": "system", 00:19:08.994 "dma_device_type": 1 00:19:08.994 }, 00:19:08.994 { 00:19:08.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.994 "dma_device_type": 2 00:19:08.994 } 00:19:08.994 ], 00:19:08.994 "driver_specific": {} 00:19:08.994 }' 00:19:08.994 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.994 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.994 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.994 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:09.253 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:09.509 [2024-07-25 12:00:55.522281] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.509 [2024-07-25 12:00:55.522305] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.509 [2024-07-25 12:00:55.522349] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:09.509 12:00:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.509 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.510 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.766 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.766 "name": "Existed_Raid", 00:19:09.766 "uuid": "5518969f-e449-4baa-aaad-25e14deac336", 00:19:09.766 "strip_size_kb": 64, 00:19:09.766 "state": "offline", 00:19:09.766 "raid_level": "concat", 00:19:09.766 "superblock": true, 00:19:09.766 "num_base_bdevs": 4, 00:19:09.766 "num_base_bdevs_discovered": 3, 00:19:09.766 "num_base_bdevs_operational": 3, 00:19:09.766 "base_bdevs_list": [ 00:19:09.766 { 00:19:09.766 "name": null, 00:19:09.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.766 "is_configured": false, 00:19:09.766 "data_offset": 2048, 00:19:09.766 "data_size": 63488 00:19:09.766 }, 00:19:09.766 { 00:19:09.766 "name": "BaseBdev2", 00:19:09.766 "uuid": "f8fae47e-7ada-40c7-a89f-7832425a6281", 00:19:09.766 "is_configured": true, 00:19:09.766 "data_offset": 2048, 00:19:09.766 "data_size": 63488 00:19:09.766 }, 00:19:09.766 { 00:19:09.766 "name": "BaseBdev3", 00:19:09.766 "uuid": "9fc6b7d2-5c39-4a48-830c-8e677e9ed658", 00:19:09.766 "is_configured": true, 00:19:09.766 "data_offset": 2048, 00:19:09.766 "data_size": 63488 00:19:09.766 }, 00:19:09.766 { 00:19:09.766 "name": "BaseBdev4", 00:19:09.766 "uuid": "ac17994d-6a37-4853-9bf2-a460d7c4e1cd", 00:19:09.766 "is_configured": true, 00:19:09.766 "data_offset": 2048, 00:19:09.766 "data_size": 63488 00:19:09.766 } 00:19:09.766 ] 00:19:09.766 }' 00:19:09.766 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.766 12:00:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.333 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:10.333 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.333 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.333 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:10.590 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:10.590 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.590 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:10.847 [2024-07-25 12:00:56.790742] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.847 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:10.847 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.847 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.847 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.104 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.104 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.104 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:11.365 [2024-07-25 12:00:57.261901] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:11.365 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.365 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.365 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.365 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.620 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.620 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.620 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:11.620 [2024-07-25 12:00:57.725203] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:11.620 [2024-07-25 12:00:57.725239] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1465830 name Existed_Raid, state offline 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ 
)) 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:11.878 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.143 BaseBdev2 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:12.144 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.401 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.658 [ 00:19:12.658 { 00:19:12.658 "name": "BaseBdev2", 00:19:12.658 "aliases": [ 00:19:12.658 "79d07638-db3f-4522-a167-e951a50c3f98" 00:19:12.658 ], 00:19:12.658 "product_name": "Malloc disk", 00:19:12.658 "block_size": 512, 00:19:12.658 "num_blocks": 65536, 00:19:12.658 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:12.658 "assigned_rate_limits": { 00:19:12.658 "rw_ios_per_sec": 0, 00:19:12.658 "rw_mbytes_per_sec": 0, 00:19:12.658 "r_mbytes_per_sec": 0, 00:19:12.658 "w_mbytes_per_sec": 0 00:19:12.658 }, 00:19:12.658 "claimed": false, 00:19:12.658 "zoned": false, 00:19:12.658 "supported_io_types": { 00:19:12.658 "read": true, 00:19:12.658 "write": true, 00:19:12.658 "unmap": true, 00:19:12.658 "flush": true, 00:19:12.658 "reset": true, 00:19:12.658 "nvme_admin": false, 00:19:12.658 "nvme_io": false, 00:19:12.658 "nvme_io_md": false, 00:19:12.658 "write_zeroes": true, 00:19:12.658 "zcopy": true, 00:19:12.658 "get_zone_info": false, 00:19:12.658 "zone_management": false, 00:19:12.658 "zone_append": false, 00:19:12.658 "compare": false, 00:19:12.658 "compare_and_write": false, 00:19:12.658 "abort": true, 00:19:12.658 "seek_hole": false, 00:19:12.658 "seek_data": false, 00:19:12.658 
"copy": true, 00:19:12.658 "nvme_iov_md": false 00:19:12.658 }, 00:19:12.658 "memory_domains": [ 00:19:12.658 { 00:19:12.658 "dma_device_id": "system", 00:19:12.658 "dma_device_type": 1 00:19:12.658 }, 00:19:12.658 { 00:19:12.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.658 "dma_device_type": 2 00:19:12.658 } 00:19:12.658 ], 00:19:12.658 "driver_specific": {} 00:19:12.658 } 00:19:12.658 ] 00:19:12.658 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:12.658 12:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:12.658 12:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.658 12:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:12.915 BaseBdev3 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:12.915 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.172 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.429 [ 00:19:13.429 { 00:19:13.429 "name": "BaseBdev3", 00:19:13.429 "aliases": [ 00:19:13.429 "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b" 00:19:13.429 ], 00:19:13.429 "product_name": "Malloc disk", 00:19:13.429 "block_size": 512, 00:19:13.429 "num_blocks": 65536, 00:19:13.429 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:13.429 "assigned_rate_limits": { 00:19:13.429 "rw_ios_per_sec": 0, 00:19:13.429 "rw_mbytes_per_sec": 0, 00:19:13.429 "r_mbytes_per_sec": 0, 00:19:13.429 "w_mbytes_per_sec": 0 00:19:13.429 }, 00:19:13.429 "claimed": false, 00:19:13.429 "zoned": false, 00:19:13.429 "supported_io_types": { 00:19:13.429 "read": true, 00:19:13.429 "write": true, 00:19:13.429 "unmap": true, 00:19:13.429 "flush": true, 00:19:13.429 "reset": true, 00:19:13.429 "nvme_admin": false, 00:19:13.429 "nvme_io": false, 00:19:13.429 "nvme_io_md": false, 00:19:13.429 "write_zeroes": true, 00:19:13.429 "zcopy": true, 00:19:13.429 "get_zone_info": false, 00:19:13.429 "zone_management": false, 00:19:13.429 "zone_append": false, 00:19:13.429 "compare": false, 00:19:13.429 "compare_and_write": false, 00:19:13.429 "abort": true, 00:19:13.429 "seek_hole": false, 00:19:13.429 "seek_data": false, 00:19:13.429 "copy": true, 00:19:13.429 "nvme_iov_md": false 00:19:13.429 }, 00:19:13.429 "memory_domains": [ 00:19:13.429 { 00:19:13.429 "dma_device_id": "system", 00:19:13.429 "dma_device_type": 1 00:19:13.429 }, 00:19:13.429 { 00:19:13.429 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:13.429 "dma_device_type": 2 00:19:13.429 } 00:19:13.429 ], 00:19:13.429 "driver_specific": {} 00:19:13.429 } 00:19:13.429 ] 00:19:13.429 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:13.429 12:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:13.429 12:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.429 12:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:13.685 BaseBdev4 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.685 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:13.942 [ 00:19:13.942 { 00:19:13.942 "name": "BaseBdev4", 00:19:13.942 "aliases": [ 00:19:13.942 "ff96fd86-c127-4484-ad80-e868e38b2711" 00:19:13.942 ], 00:19:13.942 "product_name": "Malloc disk", 00:19:13.942 "block_size": 512, 00:19:13.942 "num_blocks": 65536, 00:19:13.942 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:13.942 "assigned_rate_limits": { 00:19:13.942 "rw_ios_per_sec": 0, 00:19:13.942 "rw_mbytes_per_sec": 0, 00:19:13.942 "r_mbytes_per_sec": 0, 00:19:13.942 "w_mbytes_per_sec": 0 00:19:13.942 }, 00:19:13.942 "claimed": false, 00:19:13.942 "zoned": false, 00:19:13.942 "supported_io_types": { 00:19:13.942 "read": true, 00:19:13.942 "write": true, 00:19:13.942 "unmap": true, 00:19:13.942 "flush": true, 00:19:13.942 "reset": true, 00:19:13.942 "nvme_admin": false, 00:19:13.942 "nvme_io": false, 00:19:13.942 "nvme_io_md": false, 00:19:13.942 "write_zeroes": true, 00:19:13.942 "zcopy": true, 00:19:13.942 "get_zone_info": false, 00:19:13.942 "zone_management": false, 00:19:13.942 "zone_append": false, 00:19:13.942 "compare": false, 00:19:13.942 "compare_and_write": false, 00:19:13.942 "abort": true, 00:19:13.942 "seek_hole": false, 00:19:13.942 "seek_data": false, 00:19:13.942 "copy": true, 00:19:13.942 "nvme_iov_md": false 00:19:13.942 }, 00:19:13.942 "memory_domains": [ 00:19:13.942 { 00:19:13.942 "dma_device_id": "system", 00:19:13.942 "dma_device_type": 1 00:19:13.942 }, 00:19:13.942 { 00:19:13.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.942 "dma_device_type": 2 00:19:13.942 } 00:19:13.942 ], 00:19:13.942 "driver_specific": {} 00:19:13.942 } 00:19:13.942 ] 00:19:13.942 12:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:19:13.942 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:13.942 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.942 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.198 [2024-07-25 12:01:00.222586] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.198 [2024-07-25 12:01:00.222623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.199 [2024-07-25 12:01:00.222641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.199 [2024-07-25 12:01:00.224022] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.199 [2024-07-25 12:01:00.224071] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.199 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.455 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.455 "name": "Existed_Raid", 00:19:14.455 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:14.455 "strip_size_kb": 64, 00:19:14.455 "state": "configuring", 00:19:14.455 "raid_level": "concat", 00:19:14.455 "superblock": true, 00:19:14.455 "num_base_bdevs": 4, 00:19:14.455 "num_base_bdevs_discovered": 3, 00:19:14.455 "num_base_bdevs_operational": 4, 00:19:14.455 "base_bdevs_list": [ 00:19:14.455 { 00:19:14.455 "name": "BaseBdev1", 00:19:14.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.455 "is_configured": false, 00:19:14.455 "data_offset": 0, 00:19:14.455 "data_size": 0 00:19:14.455 }, 00:19:14.455 { 00:19:14.455 "name": "BaseBdev2", 00:19:14.455 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:14.455 "is_configured": true, 00:19:14.455 
"data_offset": 2048, 00:19:14.456 "data_size": 63488 00:19:14.456 }, 00:19:14.456 { 00:19:14.456 "name": "BaseBdev3", 00:19:14.456 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:14.456 "is_configured": true, 00:19:14.456 "data_offset": 2048, 00:19:14.456 "data_size": 63488 00:19:14.456 }, 00:19:14.456 { 00:19:14.456 "name": "BaseBdev4", 00:19:14.456 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:14.456 "is_configured": true, 00:19:14.456 "data_offset": 2048, 00:19:14.456 "data_size": 63488 00:19:14.456 } 00:19:14.456 ] 00:19:14.456 }' 00:19:14.456 12:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.456 12:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.019 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:15.277 [2024-07-25 12:01:01.233212] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.277 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.534 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.534 "name": "Existed_Raid", 00:19:15.534 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:15.534 "strip_size_kb": 64, 00:19:15.534 "state": "configuring", 00:19:15.534 "raid_level": "concat", 00:19:15.534 "superblock": true, 00:19:15.534 "num_base_bdevs": 4, 00:19:15.534 "num_base_bdevs_discovered": 2, 00:19:15.534 "num_base_bdevs_operational": 4, 00:19:15.534 "base_bdevs_list": [ 00:19:15.534 { 00:19:15.534 "name": "BaseBdev1", 00:19:15.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.534 "is_configured": false, 00:19:15.534 "data_offset": 0, 00:19:15.534 "data_size": 0 00:19:15.534 }, 00:19:15.534 { 00:19:15.534 "name": null, 00:19:15.534 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:15.534 "is_configured": false, 00:19:15.534 "data_offset": 2048, 00:19:15.534 "data_size": 63488 
00:19:15.534 }, 00:19:15.534 { 00:19:15.534 "name": "BaseBdev3", 00:19:15.534 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:15.534 "is_configured": true, 00:19:15.534 "data_offset": 2048, 00:19:15.534 "data_size": 63488 00:19:15.534 }, 00:19:15.534 { 00:19:15.534 "name": "BaseBdev4", 00:19:15.534 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:15.534 "is_configured": true, 00:19:15.534 "data_offset": 2048, 00:19:15.534 "data_size": 63488 00:19:15.534 } 00:19:15.534 ] 00:19:15.534 }' 00:19:15.534 12:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.534 12:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.096 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.096 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.352 [2024-07-25 12:01:02.451547] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.352 BaseBdev1 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:16.352 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.610 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.894 [ 00:19:16.895 { 00:19:16.895 "name": "BaseBdev1", 00:19:16.895 "aliases": [ 00:19:16.895 "694641b9-9a6e-4f43-a5ca-4cdb19486393" 00:19:16.895 ], 00:19:16.895 "product_name": "Malloc disk", 00:19:16.895 "block_size": 512, 00:19:16.895 "num_blocks": 65536, 00:19:16.895 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:16.895 "assigned_rate_limits": { 00:19:16.895 "rw_ios_per_sec": 0, 00:19:16.895 "rw_mbytes_per_sec": 0, 00:19:16.895 "r_mbytes_per_sec": 0, 00:19:16.895 "w_mbytes_per_sec": 0 00:19:16.895 }, 00:19:16.895 "claimed": true, 00:19:16.895 "claim_type": "exclusive_write", 00:19:16.895 "zoned": false, 00:19:16.895 "supported_io_types": { 00:19:16.895 "read": true, 00:19:16.895 "write": true, 00:19:16.895 "unmap": true, 00:19:16.895 "flush": true, 00:19:16.895 "reset": true, 00:19:16.895 "nvme_admin": false, 00:19:16.895 "nvme_io": false, 00:19:16.895 "nvme_io_md": 
false, 00:19:16.895 "write_zeroes": true, 00:19:16.895 "zcopy": true, 00:19:16.895 "get_zone_info": false, 00:19:16.895 "zone_management": false, 00:19:16.895 "zone_append": false, 00:19:16.895 "compare": false, 00:19:16.895 "compare_and_write": false, 00:19:16.895 "abort": true, 00:19:16.895 "seek_hole": false, 00:19:16.895 "seek_data": false, 00:19:16.895 "copy": true, 00:19:16.895 "nvme_iov_md": false 00:19:16.895 }, 00:19:16.895 "memory_domains": [ 00:19:16.895 { 00:19:16.895 "dma_device_id": "system", 00:19:16.895 "dma_device_type": 1 00:19:16.895 }, 00:19:16.895 { 00:19:16.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.895 "dma_device_type": 2 00:19:16.895 } 00:19:16.895 ], 00:19:16.895 "driver_specific": {} 00:19:16.895 } 00:19:16.895 ] 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.895 12:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.164 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.164 "name": "Existed_Raid", 00:19:17.164 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:17.164 "strip_size_kb": 64, 00:19:17.164 "state": "configuring", 00:19:17.164 "raid_level": "concat", 00:19:17.164 "superblock": true, 00:19:17.164 "num_base_bdevs": 4, 00:19:17.164 "num_base_bdevs_discovered": 3, 00:19:17.164 "num_base_bdevs_operational": 4, 00:19:17.164 "base_bdevs_list": [ 00:19:17.164 { 00:19:17.164 "name": "BaseBdev1", 00:19:17.164 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:17.164 "is_configured": true, 00:19:17.164 "data_offset": 2048, 00:19:17.164 "data_size": 63488 00:19:17.164 }, 00:19:17.164 { 00:19:17.164 "name": null, 00:19:17.164 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:17.164 "is_configured": false, 00:19:17.164 "data_offset": 2048, 00:19:17.164 "data_size": 63488 00:19:17.164 }, 00:19:17.164 { 00:19:17.164 "name": "BaseBdev3", 00:19:17.164 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:17.164 "is_configured": true, 00:19:17.164 "data_offset": 2048, 00:19:17.164 
"data_size": 63488 00:19:17.164 }, 00:19:17.164 { 00:19:17.164 "name": "BaseBdev4", 00:19:17.164 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:17.164 "is_configured": true, 00:19:17.164 "data_offset": 2048, 00:19:17.164 "data_size": 63488 00:19:17.164 } 00:19:17.165 ] 00:19:17.165 }' 00:19:17.165 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.165 12:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.727 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.727 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:17.984 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:17.984 12:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:18.241 [2024-07-25 12:01:04.144038] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.241 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.498 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.498 "name": "Existed_Raid", 00:19:18.498 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:18.498 "strip_size_kb": 64, 00:19:18.498 "state": "configuring", 00:19:18.498 "raid_level": "concat", 00:19:18.498 "superblock": true, 00:19:18.498 "num_base_bdevs": 4, 00:19:18.498 "num_base_bdevs_discovered": 2, 00:19:18.498 "num_base_bdevs_operational": 4, 00:19:18.498 "base_bdevs_list": [ 00:19:18.498 { 00:19:18.498 "name": "BaseBdev1", 00:19:18.498 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:18.498 "is_configured": true, 00:19:18.498 "data_offset": 2048, 00:19:18.498 "data_size": 63488 00:19:18.498 }, 00:19:18.498 { 
00:19:18.498 "name": null, 00:19:18.498 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:18.498 "is_configured": false, 00:19:18.498 "data_offset": 2048, 00:19:18.498 "data_size": 63488 00:19:18.498 }, 00:19:18.498 { 00:19:18.498 "name": null, 00:19:18.498 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:18.498 "is_configured": false, 00:19:18.498 "data_offset": 2048, 00:19:18.498 "data_size": 63488 00:19:18.498 }, 00:19:18.498 { 00:19:18.498 "name": "BaseBdev4", 00:19:18.498 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:18.498 "is_configured": true, 00:19:18.498 "data_offset": 2048, 00:19:18.498 "data_size": 63488 00:19:18.498 } 00:19:18.498 ] 00:19:18.498 }' 00:19:18.498 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.498 12:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.062 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.062 12:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.319 [2024-07-25 12:01:05.399371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.319 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.585 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.585 "name": "Existed_Raid", 00:19:19.585 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:19.585 "strip_size_kb": 64, 00:19:19.585 "state": "configuring", 00:19:19.585 "raid_level": "concat", 
00:19:19.585 "superblock": true, 00:19:19.585 "num_base_bdevs": 4, 00:19:19.585 "num_base_bdevs_discovered": 3, 00:19:19.586 "num_base_bdevs_operational": 4, 00:19:19.586 "base_bdevs_list": [ 00:19:19.586 { 00:19:19.586 "name": "BaseBdev1", 00:19:19.586 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:19.586 "is_configured": true, 00:19:19.586 "data_offset": 2048, 00:19:19.586 "data_size": 63488 00:19:19.586 }, 00:19:19.586 { 00:19:19.586 "name": null, 00:19:19.586 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:19.586 "is_configured": false, 00:19:19.586 "data_offset": 2048, 00:19:19.586 "data_size": 63488 00:19:19.586 }, 00:19:19.586 { 00:19:19.586 "name": "BaseBdev3", 00:19:19.586 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:19.586 "is_configured": true, 00:19:19.586 "data_offset": 2048, 00:19:19.586 "data_size": 63488 00:19:19.586 }, 00:19:19.586 { 00:19:19.586 "name": "BaseBdev4", 00:19:19.586 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:19.586 "is_configured": true, 00:19:19.586 "data_offset": 2048, 00:19:19.586 "data_size": 63488 00:19:19.586 } 00:19:19.586 ] 00:19:19.586 }' 00:19:19.586 12:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.586 12:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.149 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.149 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.405 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:20.405 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:20.662 [2024-07-25 12:01:06.682763] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.662 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.662 12:01:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.920 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.920 "name": "Existed_Raid", 00:19:20.920 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:20.920 "strip_size_kb": 64, 00:19:20.920 "state": "configuring", 00:19:20.920 "raid_level": "concat", 00:19:20.920 "superblock": true, 00:19:20.920 "num_base_bdevs": 4, 00:19:20.920 "num_base_bdevs_discovered": 2, 00:19:20.920 "num_base_bdevs_operational": 4, 00:19:20.920 "base_bdevs_list": [ 00:19:20.920 { 00:19:20.920 "name": null, 00:19:20.920 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:20.920 "is_configured": false, 00:19:20.920 "data_offset": 2048, 00:19:20.920 "data_size": 63488 00:19:20.920 }, 00:19:20.920 { 00:19:20.920 "name": null, 00:19:20.920 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:20.920 "is_configured": false, 00:19:20.920 "data_offset": 2048, 00:19:20.920 "data_size": 63488 00:19:20.920 }, 00:19:20.920 { 00:19:20.920 "name": "BaseBdev3", 00:19:20.920 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:20.920 "is_configured": true, 00:19:20.920 "data_offset": 2048, 00:19:20.920 "data_size": 63488 00:19:20.920 }, 00:19:20.920 { 00:19:20.920 "name": "BaseBdev4", 00:19:20.920 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:20.920 "is_configured": true, 00:19:20.920 "data_offset": 2048, 00:19:20.920 "data_size": 63488 00:19:20.920 } 00:19:20.920 ] 00:19:20.920 }' 00:19:20.920 12:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.920 12:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.484 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.484 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:21.742 [2024-07-25 12:01:07.823740] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.742 12:01:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.742 12:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.998 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.998 "name": "Existed_Raid", 00:19:21.998 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:21.998 "strip_size_kb": 64, 00:19:21.998 "state": "configuring", 00:19:21.998 "raid_level": "concat", 00:19:21.998 "superblock": true, 00:19:21.998 "num_base_bdevs": 4, 00:19:21.998 "num_base_bdevs_discovered": 3, 00:19:21.998 "num_base_bdevs_operational": 4, 00:19:21.998 "base_bdevs_list": [ 00:19:21.998 { 00:19:21.998 "name": null, 00:19:21.998 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:21.998 "is_configured": false, 00:19:21.998 "data_offset": 2048, 00:19:21.998 "data_size": 63488 00:19:21.998 }, 00:19:21.998 { 00:19:21.998 "name": "BaseBdev2", 00:19:21.998 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:21.998 "is_configured": true, 00:19:21.998 "data_offset": 2048, 00:19:21.998 "data_size": 63488 00:19:21.998 }, 00:19:21.998 { 00:19:21.998 "name": "BaseBdev3", 00:19:21.998 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:21.998 "is_configured": true, 00:19:21.998 "data_offset": 2048, 00:19:21.998 "data_size": 63488 00:19:21.998 }, 00:19:21.998 { 00:19:21.998 "name": "BaseBdev4", 00:19:21.998 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:21.998 "is_configured": true, 00:19:21.998 "data_offset": 2048, 00:19:21.998 "data_size": 63488 00:19:21.998 } 00:19:21.998 ] 00:19:21.998 }' 00:19:21.998 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.998 12:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.560 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.560 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.817 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:22.817 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.817 12:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 694641b9-9a6e-4f43-a5ca-4cdb19486393 00:19:23.075 [2024-07-25 12:01:09.170349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:23.075 [2024-07-25 12:01:09.170488] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1465530 00:19:23.075 [2024-07-25 12:01:09.170501] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:23.075 [2024-07-25 12:01:09.170654] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14619f0 00:19:23.075 [2024-07-25 12:01:09.170758] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1465530 00:19:23.075 [2024-07-25 12:01:09.170767] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1465530 00:19:23.075 [2024-07-25 12:01:09.170847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.075 NewBaseBdev 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:23.075 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.331 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.588 [ 00:19:23.588 { 00:19:23.588 "name": "NewBaseBdev", 00:19:23.588 "aliases": [ 00:19:23.588 "694641b9-9a6e-4f43-a5ca-4cdb19486393" 00:19:23.588 ], 00:19:23.588 "product_name": "Malloc disk", 00:19:23.588 "block_size": 512, 00:19:23.588 "num_blocks": 65536, 00:19:23.588 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:23.588 "assigned_rate_limits": { 00:19:23.588 "rw_ios_per_sec": 0, 00:19:23.588 "rw_mbytes_per_sec": 0, 00:19:23.588 "r_mbytes_per_sec": 0, 00:19:23.588 "w_mbytes_per_sec": 0 00:19:23.588 }, 00:19:23.588 "claimed": true, 00:19:23.588 "claim_type": "exclusive_write", 00:19:23.588 "zoned": false, 00:19:23.588 "supported_io_types": { 00:19:23.588 "read": true, 00:19:23.588 "write": true, 00:19:23.588 "unmap": true, 00:19:23.588 "flush": true, 00:19:23.588 "reset": true, 00:19:23.588 "nvme_admin": false, 00:19:23.588 "nvme_io": false, 00:19:23.588 "nvme_io_md": false, 00:19:23.588 "write_zeroes": true, 00:19:23.588 "zcopy": true, 00:19:23.588 "get_zone_info": false, 00:19:23.588 "zone_management": false, 00:19:23.588 "zone_append": false, 00:19:23.588 "compare": false, 00:19:23.588 "compare_and_write": false, 00:19:23.588 "abort": true, 00:19:23.588 "seek_hole": false, 00:19:23.588 "seek_data": false, 00:19:23.588 "copy": true, 00:19:23.588 "nvme_iov_md": false 00:19:23.588 }, 00:19:23.588 "memory_domains": [ 00:19:23.588 { 00:19:23.588 "dma_device_id": "system", 00:19:23.588 "dma_device_type": 1 00:19:23.588 }, 00:19:23.588 { 00:19:23.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.588 "dma_device_type": 2 00:19:23.588 } 00:19:23.588 ], 00:19:23.588 "driver_specific": {} 00:19:23.588 } 00:19:23.588 ] 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:23.588 12:01:09 
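Recreating the deleted member with a new name but the original UUID is what lets Existed_Raid reassemble automatically in the trace above. A minimal sketch of that step, reusing the UUID and sizes from the log (the 2000 ms timeout mirrors the waitforbdev helper's default; the jq check is an illustrative stand-in for its polling loop):
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
# 32 MB malloc bdev, 512-byte blocks, carrying the old BaseBdev1 UUID
$rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u 694641b9-9a6e-4f43-a5ca-4cdb19486393
# Let the bdev layer examine it, then confirm the raid has claimed it
$rpc -s $sock bdev_wait_for_examine
$rpc -s $sock bdev_get_bdevs -b NewBaseBdev -t 2000 | jq '.[0].claimed'   # expect true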
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.588 12:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.151 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.151 "name": "Existed_Raid", 00:19:24.151 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:24.151 "strip_size_kb": 64, 00:19:24.151 "state": "online", 00:19:24.151 "raid_level": "concat", 00:19:24.151 "superblock": true, 00:19:24.151 "num_base_bdevs": 4, 00:19:24.151 "num_base_bdevs_discovered": 4, 00:19:24.151 "num_base_bdevs_operational": 4, 00:19:24.151 "base_bdevs_list": [ 00:19:24.151 { 00:19:24.151 "name": "NewBaseBdev", 00:19:24.151 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:24.151 "is_configured": true, 00:19:24.151 "data_offset": 2048, 00:19:24.151 "data_size": 63488 00:19:24.151 }, 00:19:24.151 { 00:19:24.151 "name": "BaseBdev2", 00:19:24.151 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:24.151 "is_configured": true, 00:19:24.151 "data_offset": 2048, 00:19:24.151 "data_size": 63488 00:19:24.151 }, 00:19:24.151 { 00:19:24.151 "name": "BaseBdev3", 00:19:24.151 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:24.151 "is_configured": true, 00:19:24.151 "data_offset": 2048, 00:19:24.152 "data_size": 63488 00:19:24.152 }, 00:19:24.152 { 00:19:24.152 "name": "BaseBdev4", 00:19:24.152 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:24.152 "is_configured": true, 00:19:24.152 "data_offset": 2048, 00:19:24.152 "data_size": 63488 00:19:24.152 } 00:19:24.152 ] 00:19:24.152 }' 00:19:24.152 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.152 12:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.715 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:24.715 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:24.715 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:24.716 12:01:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:24.716 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:24.716 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:24.716 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:24.716 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:24.972 [2024-07-25 12:01:10.843078] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.972 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.972 "name": "Existed_Raid", 00:19:24.972 "aliases": [ 00:19:24.972 "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8" 00:19:24.972 ], 00:19:24.972 "product_name": "Raid Volume", 00:19:24.972 "block_size": 512, 00:19:24.972 "num_blocks": 253952, 00:19:24.972 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:24.972 "assigned_rate_limits": { 00:19:24.972 "rw_ios_per_sec": 0, 00:19:24.972 "rw_mbytes_per_sec": 0, 00:19:24.972 "r_mbytes_per_sec": 0, 00:19:24.972 "w_mbytes_per_sec": 0 00:19:24.972 }, 00:19:24.972 "claimed": false, 00:19:24.972 "zoned": false, 00:19:24.972 "supported_io_types": { 00:19:24.972 "read": true, 00:19:24.972 "write": true, 00:19:24.972 "unmap": true, 00:19:24.972 "flush": true, 00:19:24.972 "reset": true, 00:19:24.972 "nvme_admin": false, 00:19:24.972 "nvme_io": false, 00:19:24.972 "nvme_io_md": false, 00:19:24.972 "write_zeroes": true, 00:19:24.972 "zcopy": false, 00:19:24.972 "get_zone_info": false, 00:19:24.972 "zone_management": false, 00:19:24.972 "zone_append": false, 00:19:24.972 "compare": false, 00:19:24.972 "compare_and_write": false, 00:19:24.972 "abort": false, 00:19:24.972 "seek_hole": false, 00:19:24.972 "seek_data": false, 00:19:24.972 "copy": false, 00:19:24.973 "nvme_iov_md": false 00:19:24.973 }, 00:19:24.973 "memory_domains": [ 00:19:24.973 { 00:19:24.973 "dma_device_id": "system", 00:19:24.973 "dma_device_type": 1 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.973 "dma_device_type": 2 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "system", 00:19:24.973 "dma_device_type": 1 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.973 "dma_device_type": 2 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "system", 00:19:24.973 "dma_device_type": 1 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.973 "dma_device_type": 2 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "system", 00:19:24.973 "dma_device_type": 1 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.973 "dma_device_type": 2 00:19:24.973 } 00:19:24.973 ], 00:19:24.973 "driver_specific": { 00:19:24.973 "raid": { 00:19:24.973 "uuid": "4e47dacf-8dd5-4a60-872c-3d0bbb8bb2f8", 00:19:24.973 "strip_size_kb": 64, 00:19:24.973 "state": "online", 00:19:24.973 "raid_level": "concat", 00:19:24.973 "superblock": true, 00:19:24.973 "num_base_bdevs": 4, 00:19:24.973 "num_base_bdevs_discovered": 4, 00:19:24.973 "num_base_bdevs_operational": 4, 00:19:24.973 "base_bdevs_list": [ 00:19:24.973 { 00:19:24.973 "name": "NewBaseBdev", 00:19:24.973 "uuid": 
"694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:24.973 "is_configured": true, 00:19:24.973 "data_offset": 2048, 00:19:24.973 "data_size": 63488 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "name": "BaseBdev2", 00:19:24.973 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:24.973 "is_configured": true, 00:19:24.973 "data_offset": 2048, 00:19:24.973 "data_size": 63488 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "name": "BaseBdev3", 00:19:24.973 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:24.973 "is_configured": true, 00:19:24.973 "data_offset": 2048, 00:19:24.973 "data_size": 63488 00:19:24.973 }, 00:19:24.973 { 00:19:24.973 "name": "BaseBdev4", 00:19:24.973 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:24.973 "is_configured": true, 00:19:24.973 "data_offset": 2048, 00:19:24.973 "data_size": 63488 00:19:24.973 } 00:19:24.973 ] 00:19:24.973 } 00:19:24.973 } 00:19:24.973 }' 00:19:24.973 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.973 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:24.973 BaseBdev2 00:19:24.973 BaseBdev3 00:19:24.973 BaseBdev4' 00:19:24.973 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.973 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:24.973 12:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.229 "name": "NewBaseBdev", 00:19:25.229 "aliases": [ 00:19:25.229 "694641b9-9a6e-4f43-a5ca-4cdb19486393" 00:19:25.229 ], 00:19:25.229 "product_name": "Malloc disk", 00:19:25.229 "block_size": 512, 00:19:25.229 "num_blocks": 65536, 00:19:25.229 "uuid": "694641b9-9a6e-4f43-a5ca-4cdb19486393", 00:19:25.229 "assigned_rate_limits": { 00:19:25.229 "rw_ios_per_sec": 0, 00:19:25.229 "rw_mbytes_per_sec": 0, 00:19:25.229 "r_mbytes_per_sec": 0, 00:19:25.229 "w_mbytes_per_sec": 0 00:19:25.229 }, 00:19:25.229 "claimed": true, 00:19:25.229 "claim_type": "exclusive_write", 00:19:25.229 "zoned": false, 00:19:25.229 "supported_io_types": { 00:19:25.229 "read": true, 00:19:25.229 "write": true, 00:19:25.229 "unmap": true, 00:19:25.229 "flush": true, 00:19:25.229 "reset": true, 00:19:25.229 "nvme_admin": false, 00:19:25.229 "nvme_io": false, 00:19:25.229 "nvme_io_md": false, 00:19:25.229 "write_zeroes": true, 00:19:25.229 "zcopy": true, 00:19:25.229 "get_zone_info": false, 00:19:25.229 "zone_management": false, 00:19:25.229 "zone_append": false, 00:19:25.229 "compare": false, 00:19:25.229 "compare_and_write": false, 00:19:25.229 "abort": true, 00:19:25.229 "seek_hole": false, 00:19:25.229 "seek_data": false, 00:19:25.229 "copy": true, 00:19:25.229 "nvme_iov_md": false 00:19:25.229 }, 00:19:25.229 "memory_domains": [ 00:19:25.229 { 00:19:25.229 "dma_device_id": "system", 00:19:25.229 "dma_device_type": 1 00:19:25.229 }, 00:19:25.229 { 00:19:25.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.229 "dma_device_type": 2 00:19:25.229 } 00:19:25.229 ], 00:19:25.229 "driver_specific": {} 00:19:25.229 }' 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.229 12:01:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.229 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:25.485 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.742 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.742 "name": "BaseBdev2", 00:19:25.742 "aliases": [ 00:19:25.742 "79d07638-db3f-4522-a167-e951a50c3f98" 00:19:25.742 ], 00:19:25.742 "product_name": "Malloc disk", 00:19:25.742 "block_size": 512, 00:19:25.742 "num_blocks": 65536, 00:19:25.742 "uuid": "79d07638-db3f-4522-a167-e951a50c3f98", 00:19:25.742 "assigned_rate_limits": { 00:19:25.742 "rw_ios_per_sec": 0, 00:19:25.742 "rw_mbytes_per_sec": 0, 00:19:25.742 "r_mbytes_per_sec": 0, 00:19:25.742 "w_mbytes_per_sec": 0 00:19:25.742 }, 00:19:25.742 "claimed": true, 00:19:25.742 "claim_type": "exclusive_write", 00:19:25.742 "zoned": false, 00:19:25.742 "supported_io_types": { 00:19:25.742 "read": true, 00:19:25.742 "write": true, 00:19:25.742 "unmap": true, 00:19:25.742 "flush": true, 00:19:25.742 "reset": true, 00:19:25.742 "nvme_admin": false, 00:19:25.742 "nvme_io": false, 00:19:25.742 "nvme_io_md": false, 00:19:25.742 "write_zeroes": true, 00:19:25.742 "zcopy": true, 00:19:25.742 "get_zone_info": false, 00:19:25.742 "zone_management": false, 00:19:25.742 "zone_append": false, 00:19:25.742 "compare": false, 00:19:25.742 "compare_and_write": false, 00:19:25.742 "abort": true, 00:19:25.742 "seek_hole": false, 00:19:25.742 "seek_data": false, 00:19:25.742 "copy": true, 00:19:25.742 "nvme_iov_md": false 00:19:25.742 }, 00:19:25.742 "memory_domains": [ 00:19:25.742 { 00:19:25.742 "dma_device_id": "system", 00:19:25.742 "dma_device_type": 1 00:19:25.742 }, 00:19:25.742 { 00:19:25.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.743 "dma_device_type": 2 00:19:25.743 } 00:19:25.743 ], 00:19:25.743 "driver_specific": {} 00:19:25.743 }' 00:19:25.743 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.743 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.743 12:01:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.743 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.000 12:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.000 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.000 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.000 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.000 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:26.000 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.257 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.257 "name": "BaseBdev3", 00:19:26.257 "aliases": [ 00:19:26.257 "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b" 00:19:26.257 ], 00:19:26.257 "product_name": "Malloc disk", 00:19:26.257 "block_size": 512, 00:19:26.257 "num_blocks": 65536, 00:19:26.257 "uuid": "6a62296e-2bf6-4b4d-bf3a-426937eaeb7b", 00:19:26.257 "assigned_rate_limits": { 00:19:26.257 "rw_ios_per_sec": 0, 00:19:26.257 "rw_mbytes_per_sec": 0, 00:19:26.257 "r_mbytes_per_sec": 0, 00:19:26.257 "w_mbytes_per_sec": 0 00:19:26.257 }, 00:19:26.257 "claimed": true, 00:19:26.257 "claim_type": "exclusive_write", 00:19:26.257 "zoned": false, 00:19:26.257 "supported_io_types": { 00:19:26.257 "read": true, 00:19:26.257 "write": true, 00:19:26.257 "unmap": true, 00:19:26.257 "flush": true, 00:19:26.257 "reset": true, 00:19:26.257 "nvme_admin": false, 00:19:26.257 "nvme_io": false, 00:19:26.257 "nvme_io_md": false, 00:19:26.257 "write_zeroes": true, 00:19:26.257 "zcopy": true, 00:19:26.257 "get_zone_info": false, 00:19:26.257 "zone_management": false, 00:19:26.257 "zone_append": false, 00:19:26.257 "compare": false, 00:19:26.257 "compare_and_write": false, 00:19:26.257 "abort": true, 00:19:26.257 "seek_hole": false, 00:19:26.257 "seek_data": false, 00:19:26.257 "copy": true, 00:19:26.257 "nvme_iov_md": false 00:19:26.257 }, 00:19:26.257 "memory_domains": [ 00:19:26.257 { 00:19:26.257 "dma_device_id": "system", 00:19:26.257 "dma_device_type": 1 00:19:26.257 }, 00:19:26.257 { 00:19:26.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.257 "dma_device_type": 2 00:19:26.257 } 00:19:26.257 ], 00:19:26.257 "driver_specific": {} 00:19:26.257 }' 00:19:26.257 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.257 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.514 12:01:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.514 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.771 "name": "BaseBdev4", 00:19:26.771 "aliases": [ 00:19:26.771 "ff96fd86-c127-4484-ad80-e868e38b2711" 00:19:26.771 ], 00:19:26.771 "product_name": "Malloc disk", 00:19:26.771 "block_size": 512, 00:19:26.771 "num_blocks": 65536, 00:19:26.771 "uuid": "ff96fd86-c127-4484-ad80-e868e38b2711", 00:19:26.771 "assigned_rate_limits": { 00:19:26.771 "rw_ios_per_sec": 0, 00:19:26.771 "rw_mbytes_per_sec": 0, 00:19:26.771 "r_mbytes_per_sec": 0, 00:19:26.771 "w_mbytes_per_sec": 0 00:19:26.771 }, 00:19:26.771 "claimed": true, 00:19:26.771 "claim_type": "exclusive_write", 00:19:26.771 "zoned": false, 00:19:26.771 "supported_io_types": { 00:19:26.771 "read": true, 00:19:26.771 "write": true, 00:19:26.771 "unmap": true, 00:19:26.771 "flush": true, 00:19:26.771 "reset": true, 00:19:26.771 "nvme_admin": false, 00:19:26.771 "nvme_io": false, 00:19:26.771 "nvme_io_md": false, 00:19:26.771 "write_zeroes": true, 00:19:26.771 "zcopy": true, 00:19:26.771 "get_zone_info": false, 00:19:26.771 "zone_management": false, 00:19:26.771 "zone_append": false, 00:19:26.771 "compare": false, 00:19:26.771 "compare_and_write": false, 00:19:26.771 "abort": true, 00:19:26.771 "seek_hole": false, 00:19:26.771 "seek_data": false, 00:19:26.771 "copy": true, 00:19:26.771 "nvme_iov_md": false 00:19:26.771 }, 00:19:26.771 "memory_domains": [ 00:19:26.771 { 00:19:26.771 "dma_device_id": "system", 00:19:26.771 "dma_device_type": 1 00:19:26.771 }, 00:19:26.771 { 00:19:26.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.771 "dma_device_type": 2 00:19:26.771 } 00:19:26.771 ], 00:19:26.771 "driver_specific": {} 00:19:26.771 }' 00:19:26.771 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.028 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.028 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.028 12:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.028 12:01:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.028 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.028 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.028 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.028 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.028 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.285 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.285 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.285 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.557 [2024-07-25 12:01:13.413565] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.557 [2024-07-25 12:01:13.413586] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.557 [2024-07-25 12:01:13.413631] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.557 [2024-07-25 12:01:13.413689] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.557 [2024-07-25 12:01:13.413700] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1465530 name Existed_Raid, state offline 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 4189236 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 4189236 ']' 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 4189236 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4189236 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4189236' 00:19:27.557 killing process with pid 4189236 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 4189236 00:19:27.557 [2024-07-25 12:01:13.487054] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.557 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 4189236 00:19:27.557 [2024-07-25 12:01:13.517737] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.820 12:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.820 00:19:27.820 real 0m30.388s 00:19:27.820 user 0m55.828s 00:19:27.820 sys 0m5.513s 00:19:27.820 12:01:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.820 12:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 ************************************ 00:19:27.820 END TEST raid_state_function_test_sb 00:19:27.820 ************************************ 00:19:27.820 12:01:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:27.820 12:01:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:27.820 12:01:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.820 12:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 ************************************ 00:19:27.820 START TEST raid_superblock_test 00:19:27.820 ************************************ 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1354 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1354 /var/tmp/spdk-raid.sock 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1354 ']' 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock...' 00:19:27.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.820 12:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 [2024-07-25 12:01:13.852459] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:19:27.820 [2024-07-25 12:01:13.852515] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354 ] 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested 
device 0000:3f:01.3 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:27.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.820 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:28.077 [2024-07-25 12:01:13.975385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.077 [2024-07-25 12:01:14.060627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.077 [2024-07-25 12:01:14.114521] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.077 [2024-07-25 12:01:14.114547] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.641 12:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:28.898 malloc1 00:19:28.898 12:01:14 bdev_raid.raid_superblock_test 
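The EAL/QAT messages above are emitted while the superblock test's own bdev_svc app initializes DPDK on a node without usable QAT devices. The launch itself uses the command line visible in the trace; a minimal sketch, where the socket-wait loop is a simplification of the waitforlisten helper the test actually calls:
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# Do not issue bdev_* RPCs until the UNIX domain socket exists (illustrative wait,
# the test uses waitforlisten from autotest_common.sh for this)
while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done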
-- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.154 [2024-07-25 12:01:15.194463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.154 [2024-07-25 12:01:15.194508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.155 [2024-07-25 12:01:15.194526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26172f0 00:19:29.155 [2024-07-25 12:01:15.194538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.155 [2024-07-25 12:01:15.196063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.155 [2024-07-25 12:01:15.196093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.155 pt1 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.155 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:29.412 malloc2 00:19:29.412 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.688 [2024-07-25 12:01:15.660116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.688 [2024-07-25 12:01:15.660170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.688 [2024-07-25 12:01:15.660187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26186d0 00:19:29.688 [2024-07-25 12:01:15.660199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.688 [2024-07-25 12:01:15.661653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.688 [2024-07-25 12:01:15.661680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.688 pt2 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.688 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:29.945 malloc3 00:19:29.945 12:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:30.202 [2024-07-25 12:01:16.117614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:30.202 [2024-07-25 12:01:16.117655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.202 [2024-07-25 12:01:16.117671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27b16b0 00:19:30.202 [2024-07-25 12:01:16.117682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.202 [2024-07-25 12:01:16.119036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.202 [2024-07-25 12:01:16.119062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:30.202 pt3 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.202 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:30.459 malloc4 00:19:30.459 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:30.716 [2024-07-25 12:01:16.579097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:30.716 [2024-07-25 12:01:16.579145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.716 [2024-07-25 12:01:16.579162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27af370 00:19:30.716 [2024-07-25 12:01:16.579174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.716 [2024-07-25 12:01:16.580534] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.716 [2024-07-25 12:01:16.580562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:30.716 pt4 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:30.716 [2024-07-25 12:01:16.803716] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:30.716 [2024-07-25 12:01:16.804871] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.716 [2024-07-25 12:01:16.804923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:30.716 [2024-07-25 12:01:16.804965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:30.716 [2024-07-25 12:01:16.805123] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2610560 00:19:30.716 [2024-07-25 12:01:16.805133] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:30.716 [2024-07-25 12:01:16.805326] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27b0760 00:19:30.716 [2024-07-25 12:01:16.805461] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2610560 00:19:30.716 [2024-07-25 12:01:16.805470] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2610560 00:19:30.716 [2024-07-25 12:01:16.805558] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.716 12:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.972 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.972 "name": "raid_bdev1", 00:19:30.972 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:30.972 "strip_size_kb": 64, 00:19:30.972 
"state": "online", 00:19:30.972 "raid_level": "concat", 00:19:30.972 "superblock": true, 00:19:30.972 "num_base_bdevs": 4, 00:19:30.972 "num_base_bdevs_discovered": 4, 00:19:30.972 "num_base_bdevs_operational": 4, 00:19:30.972 "base_bdevs_list": [ 00:19:30.972 { 00:19:30.972 "name": "pt1", 00:19:30.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.972 "is_configured": true, 00:19:30.972 "data_offset": 2048, 00:19:30.972 "data_size": 63488 00:19:30.972 }, 00:19:30.972 { 00:19:30.972 "name": "pt2", 00:19:30.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.972 "is_configured": true, 00:19:30.972 "data_offset": 2048, 00:19:30.972 "data_size": 63488 00:19:30.972 }, 00:19:30.972 { 00:19:30.972 "name": "pt3", 00:19:30.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.972 "is_configured": true, 00:19:30.972 "data_offset": 2048, 00:19:30.972 "data_size": 63488 00:19:30.972 }, 00:19:30.972 { 00:19:30.972 "name": "pt4", 00:19:30.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:30.972 "is_configured": true, 00:19:30.972 "data_offset": 2048, 00:19:30.972 "data_size": 63488 00:19:30.972 } 00:19:30.972 ] 00:19:30.972 }' 00:19:30.972 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.972 12:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:31.536 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:31.793 [2024-07-25 12:01:17.838687] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.793 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:31.793 "name": "raid_bdev1", 00:19:31.793 "aliases": [ 00:19:31.793 "58106461-769a-4edb-9acf-e41a5b128919" 00:19:31.793 ], 00:19:31.793 "product_name": "Raid Volume", 00:19:31.793 "block_size": 512, 00:19:31.793 "num_blocks": 253952, 00:19:31.793 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:31.793 "assigned_rate_limits": { 00:19:31.793 "rw_ios_per_sec": 0, 00:19:31.793 "rw_mbytes_per_sec": 0, 00:19:31.793 "r_mbytes_per_sec": 0, 00:19:31.793 "w_mbytes_per_sec": 0 00:19:31.793 }, 00:19:31.793 "claimed": false, 00:19:31.793 "zoned": false, 00:19:31.793 "supported_io_types": { 00:19:31.793 "read": true, 00:19:31.793 "write": true, 00:19:31.793 "unmap": true, 00:19:31.793 "flush": true, 00:19:31.793 "reset": true, 00:19:31.793 "nvme_admin": false, 00:19:31.793 "nvme_io": false, 00:19:31.793 "nvme_io_md": false, 00:19:31.793 "write_zeroes": true, 00:19:31.793 "zcopy": false, 00:19:31.793 "get_zone_info": false, 00:19:31.793 "zone_management": false, 00:19:31.793 "zone_append": false, 00:19:31.793 "compare": 
false, 00:19:31.793 "compare_and_write": false, 00:19:31.793 "abort": false, 00:19:31.793 "seek_hole": false, 00:19:31.793 "seek_data": false, 00:19:31.793 "copy": false, 00:19:31.793 "nvme_iov_md": false 00:19:31.793 }, 00:19:31.793 "memory_domains": [ 00:19:31.793 { 00:19:31.793 "dma_device_id": "system", 00:19:31.793 "dma_device_type": 1 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.793 "dma_device_type": 2 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "system", 00:19:31.793 "dma_device_type": 1 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.793 "dma_device_type": 2 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "system", 00:19:31.793 "dma_device_type": 1 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.793 "dma_device_type": 2 00:19:31.793 }, 00:19:31.793 { 00:19:31.793 "dma_device_id": "system", 00:19:31.793 "dma_device_type": 1 00:19:31.794 }, 00:19:31.794 { 00:19:31.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.794 "dma_device_type": 2 00:19:31.794 } 00:19:31.794 ], 00:19:31.794 "driver_specific": { 00:19:31.794 "raid": { 00:19:31.794 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:31.794 "strip_size_kb": 64, 00:19:31.794 "state": "online", 00:19:31.794 "raid_level": "concat", 00:19:31.794 "superblock": true, 00:19:31.794 "num_base_bdevs": 4, 00:19:31.794 "num_base_bdevs_discovered": 4, 00:19:31.794 "num_base_bdevs_operational": 4, 00:19:31.794 "base_bdevs_list": [ 00:19:31.794 { 00:19:31.794 "name": "pt1", 00:19:31.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:31.794 "is_configured": true, 00:19:31.794 "data_offset": 2048, 00:19:31.794 "data_size": 63488 00:19:31.794 }, 00:19:31.794 { 00:19:31.794 "name": "pt2", 00:19:31.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.794 "is_configured": true, 00:19:31.794 "data_offset": 2048, 00:19:31.794 "data_size": 63488 00:19:31.794 }, 00:19:31.794 { 00:19:31.794 "name": "pt3", 00:19:31.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.794 "is_configured": true, 00:19:31.794 "data_offset": 2048, 00:19:31.794 "data_size": 63488 00:19:31.794 }, 00:19:31.794 { 00:19:31.794 "name": "pt4", 00:19:31.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:31.794 "is_configured": true, 00:19:31.794 "data_offset": 2048, 00:19:31.794 "data_size": 63488 00:19:31.794 } 00:19:31.794 ] 00:19:31.794 } 00:19:31.794 } 00:19:31.794 }' 00:19:31.794 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.794 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:31.794 pt2 00:19:31.794 pt3 00:19:31.794 pt4' 00:19:31.794 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.794 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:31.794 12:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.060 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.061 "name": "pt1", 00:19:32.061 "aliases": [ 00:19:32.061 "00000000-0000-0000-0000-000000000001" 00:19:32.061 ], 00:19:32.061 "product_name": "passthru", 00:19:32.061 "block_size": 512, 
00:19:32.061 "num_blocks": 65536, 00:19:32.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:32.061 "assigned_rate_limits": { 00:19:32.061 "rw_ios_per_sec": 0, 00:19:32.061 "rw_mbytes_per_sec": 0, 00:19:32.061 "r_mbytes_per_sec": 0, 00:19:32.061 "w_mbytes_per_sec": 0 00:19:32.061 }, 00:19:32.061 "claimed": true, 00:19:32.061 "claim_type": "exclusive_write", 00:19:32.061 "zoned": false, 00:19:32.061 "supported_io_types": { 00:19:32.061 "read": true, 00:19:32.061 "write": true, 00:19:32.061 "unmap": true, 00:19:32.061 "flush": true, 00:19:32.061 "reset": true, 00:19:32.061 "nvme_admin": false, 00:19:32.061 "nvme_io": false, 00:19:32.061 "nvme_io_md": false, 00:19:32.061 "write_zeroes": true, 00:19:32.061 "zcopy": true, 00:19:32.061 "get_zone_info": false, 00:19:32.061 "zone_management": false, 00:19:32.061 "zone_append": false, 00:19:32.061 "compare": false, 00:19:32.061 "compare_and_write": false, 00:19:32.061 "abort": true, 00:19:32.061 "seek_hole": false, 00:19:32.061 "seek_data": false, 00:19:32.061 "copy": true, 00:19:32.061 "nvme_iov_md": false 00:19:32.061 }, 00:19:32.061 "memory_domains": [ 00:19:32.061 { 00:19:32.061 "dma_device_id": "system", 00:19:32.061 "dma_device_type": 1 00:19:32.061 }, 00:19:32.061 { 00:19:32.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.061 "dma_device_type": 2 00:19:32.061 } 00:19:32.061 ], 00:19:32.061 "driver_specific": { 00:19:32.061 "passthru": { 00:19:32.061 "name": "pt1", 00:19:32.061 "base_bdev_name": "malloc1" 00:19:32.061 } 00:19:32.061 } 00:19:32.061 }' 00:19:32.061 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.322 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.578 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.578 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.578 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:32.578 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.578 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.578 "name": "pt2", 00:19:32.578 "aliases": [ 00:19:32.578 "00000000-0000-0000-0000-000000000002" 00:19:32.578 ], 00:19:32.578 "product_name": "passthru", 00:19:32.578 "block_size": 512, 00:19:32.578 "num_blocks": 65536, 00:19:32.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.578 "assigned_rate_limits": { 
00:19:32.578 "rw_ios_per_sec": 0, 00:19:32.578 "rw_mbytes_per_sec": 0, 00:19:32.578 "r_mbytes_per_sec": 0, 00:19:32.578 "w_mbytes_per_sec": 0 00:19:32.578 }, 00:19:32.578 "claimed": true, 00:19:32.578 "claim_type": "exclusive_write", 00:19:32.578 "zoned": false, 00:19:32.578 "supported_io_types": { 00:19:32.578 "read": true, 00:19:32.578 "write": true, 00:19:32.578 "unmap": true, 00:19:32.578 "flush": true, 00:19:32.578 "reset": true, 00:19:32.578 "nvme_admin": false, 00:19:32.578 "nvme_io": false, 00:19:32.578 "nvme_io_md": false, 00:19:32.578 "write_zeroes": true, 00:19:32.578 "zcopy": true, 00:19:32.578 "get_zone_info": false, 00:19:32.578 "zone_management": false, 00:19:32.578 "zone_append": false, 00:19:32.578 "compare": false, 00:19:32.578 "compare_and_write": false, 00:19:32.578 "abort": true, 00:19:32.578 "seek_hole": false, 00:19:32.578 "seek_data": false, 00:19:32.578 "copy": true, 00:19:32.578 "nvme_iov_md": false 00:19:32.578 }, 00:19:32.578 "memory_domains": [ 00:19:32.578 { 00:19:32.578 "dma_device_id": "system", 00:19:32.578 "dma_device_type": 1 00:19:32.578 }, 00:19:32.578 { 00:19:32.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.578 "dma_device_type": 2 00:19:32.578 } 00:19:32.578 ], 00:19:32.578 "driver_specific": { 00:19:32.578 "passthru": { 00:19:32.578 "name": "pt2", 00:19:32.578 "base_bdev_name": "malloc2" 00:19:32.578 } 00:19:32.578 } 00:19:32.578 }' 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.835 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.090 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.090 12:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.090 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.090 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.090 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:33.090 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.347 "name": "pt3", 00:19:33.347 "aliases": [ 00:19:33.347 "00000000-0000-0000-0000-000000000003" 00:19:33.347 ], 00:19:33.347 "product_name": "passthru", 00:19:33.347 "block_size": 512, 00:19:33.347 "num_blocks": 65536, 00:19:33.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:33.347 "assigned_rate_limits": { 00:19:33.347 "rw_ios_per_sec": 0, 00:19:33.347 "rw_mbytes_per_sec": 0, 00:19:33.347 "r_mbytes_per_sec": 0, 00:19:33.347 
"w_mbytes_per_sec": 0 00:19:33.347 }, 00:19:33.347 "claimed": true, 00:19:33.347 "claim_type": "exclusive_write", 00:19:33.347 "zoned": false, 00:19:33.347 "supported_io_types": { 00:19:33.347 "read": true, 00:19:33.347 "write": true, 00:19:33.347 "unmap": true, 00:19:33.347 "flush": true, 00:19:33.347 "reset": true, 00:19:33.347 "nvme_admin": false, 00:19:33.347 "nvme_io": false, 00:19:33.347 "nvme_io_md": false, 00:19:33.347 "write_zeroes": true, 00:19:33.347 "zcopy": true, 00:19:33.347 "get_zone_info": false, 00:19:33.347 "zone_management": false, 00:19:33.347 "zone_append": false, 00:19:33.347 "compare": false, 00:19:33.347 "compare_and_write": false, 00:19:33.347 "abort": true, 00:19:33.347 "seek_hole": false, 00:19:33.347 "seek_data": false, 00:19:33.347 "copy": true, 00:19:33.347 "nvme_iov_md": false 00:19:33.347 }, 00:19:33.347 "memory_domains": [ 00:19:33.347 { 00:19:33.347 "dma_device_id": "system", 00:19:33.347 "dma_device_type": 1 00:19:33.347 }, 00:19:33.347 { 00:19:33.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.347 "dma_device_type": 2 00:19:33.347 } 00:19:33.347 ], 00:19:33.347 "driver_specific": { 00:19:33.347 "passthru": { 00:19:33.347 "name": "pt3", 00:19:33.347 "base_bdev_name": "malloc3" 00:19:33.347 } 00:19:33.347 } 00:19:33.347 }' 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.347 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:33.604 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.862 "name": "pt4", 00:19:33.862 "aliases": [ 00:19:33.862 "00000000-0000-0000-0000-000000000004" 00:19:33.862 ], 00:19:33.862 "product_name": "passthru", 00:19:33.862 "block_size": 512, 00:19:33.862 "num_blocks": 65536, 00:19:33.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:33.862 "assigned_rate_limits": { 00:19:33.862 "rw_ios_per_sec": 0, 00:19:33.862 "rw_mbytes_per_sec": 0, 00:19:33.862 "r_mbytes_per_sec": 0, 00:19:33.862 "w_mbytes_per_sec": 0 00:19:33.862 }, 00:19:33.862 "claimed": true, 00:19:33.862 "claim_type": "exclusive_write", 00:19:33.862 "zoned": 
false, 00:19:33.862 "supported_io_types": { 00:19:33.862 "read": true, 00:19:33.862 "write": true, 00:19:33.862 "unmap": true, 00:19:33.862 "flush": true, 00:19:33.862 "reset": true, 00:19:33.862 "nvme_admin": false, 00:19:33.862 "nvme_io": false, 00:19:33.862 "nvme_io_md": false, 00:19:33.862 "write_zeroes": true, 00:19:33.862 "zcopy": true, 00:19:33.862 "get_zone_info": false, 00:19:33.862 "zone_management": false, 00:19:33.862 "zone_append": false, 00:19:33.862 "compare": false, 00:19:33.862 "compare_and_write": false, 00:19:33.862 "abort": true, 00:19:33.862 "seek_hole": false, 00:19:33.862 "seek_data": false, 00:19:33.862 "copy": true, 00:19:33.862 "nvme_iov_md": false 00:19:33.862 }, 00:19:33.862 "memory_domains": [ 00:19:33.862 { 00:19:33.862 "dma_device_id": "system", 00:19:33.862 "dma_device_type": 1 00:19:33.862 }, 00:19:33.862 { 00:19:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.862 "dma_device_type": 2 00:19:33.862 } 00:19:33.862 ], 00:19:33.862 "driver_specific": { 00:19:33.862 "passthru": { 00:19:33.862 "name": "pt4", 00:19:33.862 "base_bdev_name": "malloc4" 00:19:33.862 } 00:19:33.862 } 00:19:33.862 }' 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.862 12:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.119 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:34.376 [2024-07-25 12:01:20.381359] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.376 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=58106461-769a-4edb-9acf-e41a5b128919 00:19:34.376 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 58106461-769a-4edb-9acf-e41a5b128919 ']' 00:19:34.376 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:34.633 [2024-07-25 12:01:20.609673] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.633 [2024-07-25 12:01:20.609695] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.633 [2024-07-25 12:01:20.609743] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.633 [2024-07-25 12:01:20.609801] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.633 [2024-07-25 12:01:20.609812] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2610560 name raid_bdev1, state offline 00:19:34.633 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.633 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:34.890 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:34.890 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:34.890 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.890 12:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:35.146 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.146 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:35.402 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.403 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:35.403 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.403 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:35.659 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:35.659 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:35.916 12:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:36.173 [2024-07-25 12:01:22.181735] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:36.173 [2024-07-25 12:01:22.182995] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:36.173 [2024-07-25 12:01:22.183037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:36.173 [2024-07-25 12:01:22.183068] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:36.173 [2024-07-25 12:01:22.183113] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:36.173 [2024-07-25 12:01:22.183157] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:36.173 [2024-07-25 12:01:22.183178] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:36.173 [2024-07-25 12:01:22.183204] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:36.173 [2024-07-25 12:01:22.183221] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.173 [2024-07-25 12:01:22.183231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27bad50 name raid_bdev1, state configuring 00:19:36.173 request: 00:19:36.173 { 00:19:36.173 "name": "raid_bdev1", 00:19:36.173 "raid_level": "concat", 00:19:36.173 "base_bdevs": [ 00:19:36.173 "malloc1", 00:19:36.173 "malloc2", 00:19:36.173 "malloc3", 00:19:36.173 "malloc4" 00:19:36.173 ], 00:19:36.173 "strip_size_kb": 64, 00:19:36.173 "superblock": false, 00:19:36.173 "method": "bdev_raid_create", 00:19:36.173 "req_id": 1 00:19:36.173 } 00:19:36.173 Got JSON-RPC error response 00:19:36.174 response: 00:19:36.174 { 00:19:36.174 "code": -17, 00:19:36.174 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:36.174 } 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.174 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:36.431 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:36.431 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:36.432 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.689 [2024-07-25 12:01:22.626846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.689 [2024-07-25 12:01:22.626887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.689 [2024-07-25 12:01:22.626904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27ba3f0 00:19:36.689 [2024-07-25 12:01:22.626915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.689 [2024-07-25 12:01:22.628393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.689 [2024-07-25 12:01:22.628420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.689 [2024-07-25 12:01:22.628480] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:36.689 [2024-07-25 12:01:22.628505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.689 pt1 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.689 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.946 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.946 "name": "raid_bdev1", 00:19:36.946 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:36.946 "strip_size_kb": 64, 00:19:36.946 "state": "configuring", 00:19:36.946 "raid_level": "concat", 00:19:36.946 "superblock": true, 00:19:36.946 "num_base_bdevs": 4, 00:19:36.946 "num_base_bdevs_discovered": 1, 00:19:36.946 
"num_base_bdevs_operational": 4, 00:19:36.946 "base_bdevs_list": [ 00:19:36.946 { 00:19:36.946 "name": "pt1", 00:19:36.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.946 "is_configured": true, 00:19:36.946 "data_offset": 2048, 00:19:36.946 "data_size": 63488 00:19:36.946 }, 00:19:36.946 { 00:19:36.946 "name": null, 00:19:36.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.946 "is_configured": false, 00:19:36.946 "data_offset": 2048, 00:19:36.946 "data_size": 63488 00:19:36.946 }, 00:19:36.946 { 00:19:36.946 "name": null, 00:19:36.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:36.946 "is_configured": false, 00:19:36.946 "data_offset": 2048, 00:19:36.946 "data_size": 63488 00:19:36.946 }, 00:19:36.946 { 00:19:36.946 "name": null, 00:19:36.946 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:36.946 "is_configured": false, 00:19:36.946 "data_offset": 2048, 00:19:36.946 "data_size": 63488 00:19:36.946 } 00:19:36.946 ] 00:19:36.946 }' 00:19:36.946 12:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.946 12:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.526 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:19:37.526 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.783 [2024-07-25 12:01:23.653573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.783 [2024-07-25 12:01:23.653620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.783 [2024-07-25 12:01:23.653640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260f0e0 00:19:37.783 [2024-07-25 12:01:23.653651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.783 [2024-07-25 12:01:23.653981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.784 [2024-07-25 12:01:23.653996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.784 [2024-07-25 12:01:23.654056] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.784 [2024-07-25 12:01:23.654072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.784 pt2 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:37.784 [2024-07-25 12:01:23.878191] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:37.784 12:01:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.784 12:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.040 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.040 "name": "raid_bdev1", 00:19:38.040 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:38.040 "strip_size_kb": 64, 00:19:38.040 "state": "configuring", 00:19:38.040 "raid_level": "concat", 00:19:38.040 "superblock": true, 00:19:38.040 "num_base_bdevs": 4, 00:19:38.040 "num_base_bdevs_discovered": 1, 00:19:38.040 "num_base_bdevs_operational": 4, 00:19:38.040 "base_bdevs_list": [ 00:19:38.040 { 00:19:38.040 "name": "pt1", 00:19:38.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.040 "is_configured": true, 00:19:38.040 "data_offset": 2048, 00:19:38.040 "data_size": 63488 00:19:38.040 }, 00:19:38.040 { 00:19:38.040 "name": null, 00:19:38.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.040 "is_configured": false, 00:19:38.040 "data_offset": 2048, 00:19:38.040 "data_size": 63488 00:19:38.040 }, 00:19:38.040 { 00:19:38.040 "name": null, 00:19:38.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.040 "is_configured": false, 00:19:38.040 "data_offset": 2048, 00:19:38.040 "data_size": 63488 00:19:38.040 }, 00:19:38.040 { 00:19:38.040 "name": null, 00:19:38.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:38.040 "is_configured": false, 00:19:38.040 "data_offset": 2048, 00:19:38.040 "data_size": 63488 00:19:38.040 } 00:19:38.040 ] 00:19:38.040 }' 00:19:38.041 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.041 12:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.604 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:38.604 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:38.604 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.860 [2024-07-25 12:01:24.924922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.860 [2024-07-25 12:01:24.924966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.860 [2024-07-25 12:01:24.924986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2617520 00:19:38.860 [2024-07-25 12:01:24.924997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.860 [2024-07-25 12:01:24.925325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.860 [2024-07-25 12:01:24.925341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.860 [2024-07-25 12:01:24.925398] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:38.860 [2024-07-25 12:01:24.925416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.860 pt2 00:19:38.860 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:38.860 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:38.860 12:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:39.116 [2024-07-25 12:01:25.153531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:39.116 [2024-07-25 12:01:25.153556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.116 [2024-07-25 12:01:25.153570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26116e0 00:19:39.116 [2024-07-25 12:01:25.153581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.116 [2024-07-25 12:01:25.153840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.116 [2024-07-25 12:01:25.153855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:39.116 [2024-07-25 12:01:25.153899] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:39.116 [2024-07-25 12:01:25.153914] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:39.116 pt3 00:19:39.116 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:39.116 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:39.116 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:39.372 [2024-07-25 12:01:25.378112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:39.372 [2024-07-25 12:01:25.378151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.372 [2024-07-25 12:01:25.378166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260e0f0 00:19:39.372 [2024-07-25 12:01:25.378177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.372 [2024-07-25 12:01:25.378423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.372 [2024-07-25 12:01:25.378439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:39.372 [2024-07-25 12:01:25.378481] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:39.372 [2024-07-25 12:01:25.378497] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:39.372 [2024-07-25 12:01:25.378600] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2610e40 00:19:39.372 [2024-07-25 12:01:25.378609] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:39.372 [2024-07-25 12:01:25.378766] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x260ee80 00:19:39.372 [2024-07-25 12:01:25.378882] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x2610e40 00:19:39.372 [2024-07-25 12:01:25.378891] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2610e40 00:19:39.372 [2024-07-25 12:01:25.378975] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.372 pt4 00:19:39.372 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:39.372 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:39.372 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:39.372 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.373 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.629 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.629 "name": "raid_bdev1", 00:19:39.629 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:39.629 "strip_size_kb": 64, 00:19:39.629 "state": "online", 00:19:39.629 "raid_level": "concat", 00:19:39.629 "superblock": true, 00:19:39.629 "num_base_bdevs": 4, 00:19:39.629 "num_base_bdevs_discovered": 4, 00:19:39.629 "num_base_bdevs_operational": 4, 00:19:39.629 "base_bdevs_list": [ 00:19:39.629 { 00:19:39.629 "name": "pt1", 00:19:39.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.629 "is_configured": true, 00:19:39.629 "data_offset": 2048, 00:19:39.629 "data_size": 63488 00:19:39.629 }, 00:19:39.629 { 00:19:39.629 "name": "pt2", 00:19:39.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.629 "is_configured": true, 00:19:39.629 "data_offset": 2048, 00:19:39.629 "data_size": 63488 00:19:39.629 }, 00:19:39.629 { 00:19:39.629 "name": "pt3", 00:19:39.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.629 "is_configured": true, 00:19:39.629 "data_offset": 2048, 00:19:39.629 "data_size": 63488 00:19:39.629 }, 00:19:39.629 { 00:19:39.629 "name": "pt4", 00:19:39.629 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:39.629 "is_configured": true, 00:19:39.629 "data_offset": 2048, 00:19:39.629 "data_size": 63488 00:19:39.629 } 00:19:39.629 ] 00:19:39.629 }' 00:19:39.629 12:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.629 12:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.193 12:01:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:40.193 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:40.450 [2024-07-25 12:01:26.332925] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:40.450 "name": "raid_bdev1", 00:19:40.450 "aliases": [ 00:19:40.450 "58106461-769a-4edb-9acf-e41a5b128919" 00:19:40.450 ], 00:19:40.450 "product_name": "Raid Volume", 00:19:40.450 "block_size": 512, 00:19:40.450 "num_blocks": 253952, 00:19:40.450 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:40.450 "assigned_rate_limits": { 00:19:40.450 "rw_ios_per_sec": 0, 00:19:40.450 "rw_mbytes_per_sec": 0, 00:19:40.450 "r_mbytes_per_sec": 0, 00:19:40.450 "w_mbytes_per_sec": 0 00:19:40.450 }, 00:19:40.450 "claimed": false, 00:19:40.450 "zoned": false, 00:19:40.450 "supported_io_types": { 00:19:40.450 "read": true, 00:19:40.450 "write": true, 00:19:40.450 "unmap": true, 00:19:40.450 "flush": true, 00:19:40.450 "reset": true, 00:19:40.450 "nvme_admin": false, 00:19:40.450 "nvme_io": false, 00:19:40.450 "nvme_io_md": false, 00:19:40.450 "write_zeroes": true, 00:19:40.450 "zcopy": false, 00:19:40.450 "get_zone_info": false, 00:19:40.450 "zone_management": false, 00:19:40.450 "zone_append": false, 00:19:40.450 "compare": false, 00:19:40.450 "compare_and_write": false, 00:19:40.450 "abort": false, 00:19:40.450 "seek_hole": false, 00:19:40.450 "seek_data": false, 00:19:40.450 "copy": false, 00:19:40.450 "nvme_iov_md": false 00:19:40.450 }, 00:19:40.450 "memory_domains": [ 00:19:40.450 { 00:19:40.450 "dma_device_id": "system", 00:19:40.450 "dma_device_type": 1 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.450 "dma_device_type": 2 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "system", 00:19:40.450 "dma_device_type": 1 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.450 "dma_device_type": 2 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "system", 00:19:40.450 "dma_device_type": 1 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.450 "dma_device_type": 2 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "system", 00:19:40.450 "dma_device_type": 1 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.450 "dma_device_type": 2 00:19:40.450 } 00:19:40.450 ], 00:19:40.450 "driver_specific": { 00:19:40.450 "raid": { 00:19:40.450 "uuid": "58106461-769a-4edb-9acf-e41a5b128919", 00:19:40.450 "strip_size_kb": 64, 00:19:40.450 "state": "online", 00:19:40.450 "raid_level": 
"concat", 00:19:40.450 "superblock": true, 00:19:40.450 "num_base_bdevs": 4, 00:19:40.450 "num_base_bdevs_discovered": 4, 00:19:40.450 "num_base_bdevs_operational": 4, 00:19:40.450 "base_bdevs_list": [ 00:19:40.450 { 00:19:40.450 "name": "pt1", 00:19:40.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.450 "is_configured": true, 00:19:40.450 "data_offset": 2048, 00:19:40.450 "data_size": 63488 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "name": "pt2", 00:19:40.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.450 "is_configured": true, 00:19:40.450 "data_offset": 2048, 00:19:40.450 "data_size": 63488 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "name": "pt3", 00:19:40.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.450 "is_configured": true, 00:19:40.450 "data_offset": 2048, 00:19:40.450 "data_size": 63488 00:19:40.450 }, 00:19:40.450 { 00:19:40.450 "name": "pt4", 00:19:40.450 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:40.450 "is_configured": true, 00:19:40.450 "data_offset": 2048, 00:19:40.450 "data_size": 63488 00:19:40.450 } 00:19:40.450 ] 00:19:40.450 } 00:19:40.450 } 00:19:40.450 }' 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:40.450 pt2 00:19:40.450 pt3 00:19:40.450 pt4' 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:40.450 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.707 "name": "pt1", 00:19:40.707 "aliases": [ 00:19:40.707 "00000000-0000-0000-0000-000000000001" 00:19:40.707 ], 00:19:40.707 "product_name": "passthru", 00:19:40.707 "block_size": 512, 00:19:40.707 "num_blocks": 65536, 00:19:40.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.707 "assigned_rate_limits": { 00:19:40.707 "rw_ios_per_sec": 0, 00:19:40.707 "rw_mbytes_per_sec": 0, 00:19:40.707 "r_mbytes_per_sec": 0, 00:19:40.707 "w_mbytes_per_sec": 0 00:19:40.707 }, 00:19:40.707 "claimed": true, 00:19:40.707 "claim_type": "exclusive_write", 00:19:40.707 "zoned": false, 00:19:40.707 "supported_io_types": { 00:19:40.707 "read": true, 00:19:40.707 "write": true, 00:19:40.707 "unmap": true, 00:19:40.707 "flush": true, 00:19:40.707 "reset": true, 00:19:40.707 "nvme_admin": false, 00:19:40.707 "nvme_io": false, 00:19:40.707 "nvme_io_md": false, 00:19:40.707 "write_zeroes": true, 00:19:40.707 "zcopy": true, 00:19:40.707 "get_zone_info": false, 00:19:40.707 "zone_management": false, 00:19:40.707 "zone_append": false, 00:19:40.707 "compare": false, 00:19:40.707 "compare_and_write": false, 00:19:40.707 "abort": true, 00:19:40.707 "seek_hole": false, 00:19:40.707 "seek_data": false, 00:19:40.707 "copy": true, 00:19:40.707 "nvme_iov_md": false 00:19:40.707 }, 00:19:40.707 "memory_domains": [ 00:19:40.707 { 00:19:40.707 "dma_device_id": "system", 00:19:40.707 "dma_device_type": 1 00:19:40.707 }, 00:19:40.707 { 00:19:40.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.707 "dma_device_type": 2 00:19:40.707 } 00:19:40.707 ], 00:19:40.707 
"driver_specific": { 00:19:40.707 "passthru": { 00:19:40.707 "name": "pt1", 00:19:40.707 "base_bdev_name": "malloc1" 00:19:40.707 } 00:19:40.707 } 00:19:40.707 }' 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.707 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.964 12:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:41.220 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.220 "name": "pt2", 00:19:41.220 "aliases": [ 00:19:41.220 "00000000-0000-0000-0000-000000000002" 00:19:41.220 ], 00:19:41.220 "product_name": "passthru", 00:19:41.220 "block_size": 512, 00:19:41.220 "num_blocks": 65536, 00:19:41.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.221 "assigned_rate_limits": { 00:19:41.221 "rw_ios_per_sec": 0, 00:19:41.221 "rw_mbytes_per_sec": 0, 00:19:41.221 "r_mbytes_per_sec": 0, 00:19:41.221 "w_mbytes_per_sec": 0 00:19:41.221 }, 00:19:41.221 "claimed": true, 00:19:41.221 "claim_type": "exclusive_write", 00:19:41.221 "zoned": false, 00:19:41.221 "supported_io_types": { 00:19:41.221 "read": true, 00:19:41.221 "write": true, 00:19:41.221 "unmap": true, 00:19:41.221 "flush": true, 00:19:41.221 "reset": true, 00:19:41.221 "nvme_admin": false, 00:19:41.221 "nvme_io": false, 00:19:41.221 "nvme_io_md": false, 00:19:41.221 "write_zeroes": true, 00:19:41.221 "zcopy": true, 00:19:41.221 "get_zone_info": false, 00:19:41.221 "zone_management": false, 00:19:41.221 "zone_append": false, 00:19:41.221 "compare": false, 00:19:41.221 "compare_and_write": false, 00:19:41.221 "abort": true, 00:19:41.221 "seek_hole": false, 00:19:41.221 "seek_data": false, 00:19:41.221 "copy": true, 00:19:41.221 "nvme_iov_md": false 00:19:41.221 }, 00:19:41.221 "memory_domains": [ 00:19:41.221 { 00:19:41.221 "dma_device_id": "system", 00:19:41.221 "dma_device_type": 1 00:19:41.221 }, 00:19:41.221 { 00:19:41.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.221 "dma_device_type": 2 00:19:41.221 } 00:19:41.221 ], 00:19:41.221 "driver_specific": { 00:19:41.221 "passthru": { 00:19:41.221 "name": "pt2", 00:19:41.221 "base_bdev_name": "malloc2" 00:19:41.221 } 
00:19:41.221 } 00:19:41.221 }' 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.221 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:41.477 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:41.734 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.734 "name": "pt3", 00:19:41.734 "aliases": [ 00:19:41.734 "00000000-0000-0000-0000-000000000003" 00:19:41.734 ], 00:19:41.734 "product_name": "passthru", 00:19:41.734 "block_size": 512, 00:19:41.734 "num_blocks": 65536, 00:19:41.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.734 "assigned_rate_limits": { 00:19:41.734 "rw_ios_per_sec": 0, 00:19:41.734 "rw_mbytes_per_sec": 0, 00:19:41.734 "r_mbytes_per_sec": 0, 00:19:41.734 "w_mbytes_per_sec": 0 00:19:41.734 }, 00:19:41.734 "claimed": true, 00:19:41.734 "claim_type": "exclusive_write", 00:19:41.734 "zoned": false, 00:19:41.734 "supported_io_types": { 00:19:41.734 "read": true, 00:19:41.734 "write": true, 00:19:41.734 "unmap": true, 00:19:41.734 "flush": true, 00:19:41.734 "reset": true, 00:19:41.734 "nvme_admin": false, 00:19:41.734 "nvme_io": false, 00:19:41.734 "nvme_io_md": false, 00:19:41.734 "write_zeroes": true, 00:19:41.734 "zcopy": true, 00:19:41.734 "get_zone_info": false, 00:19:41.734 "zone_management": false, 00:19:41.734 "zone_append": false, 00:19:41.734 "compare": false, 00:19:41.734 "compare_and_write": false, 00:19:41.734 "abort": true, 00:19:41.734 "seek_hole": false, 00:19:41.734 "seek_data": false, 00:19:41.734 "copy": true, 00:19:41.734 "nvme_iov_md": false 00:19:41.734 }, 00:19:41.734 "memory_domains": [ 00:19:41.734 { 00:19:41.734 "dma_device_id": "system", 00:19:41.734 "dma_device_type": 1 00:19:41.734 }, 00:19:41.734 { 00:19:41.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.734 "dma_device_type": 2 00:19:41.734 } 00:19:41.734 ], 00:19:41.734 "driver_specific": { 00:19:41.734 "passthru": { 00:19:41.734 "name": "pt3", 00:19:41.734 "base_bdev_name": "malloc3" 00:19:41.734 } 00:19:41.734 } 00:19:41.734 }' 00:19:41.734 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
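(For orientation: the trace above and below runs each configured passthru base bdev, pt1 through pt4, through the same geometry checks. The following is a condensed sketch assembled only from the jq filters and rpc.py invocations visible in this trace; the variable name raid_bdev_json for the JSON being filtered and the reliance on the harness's errexit are assumptions, since neither is shown in this excerpt.)

    # sketch, not the captured trace: pull the configured base bdev names out of the
    # raid bdev JSON shown above, then assert each base bdev's geometry
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    base_bdev_names=$(echo "$raid_bdev_json" | \
        jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(echo "$base_bdev_info" | jq .block_size) == 512 ]]      # 512-byte data blocks
        [[ $(echo "$base_bdev_info" | jq .md_size) == null ]]        # no separate metadata
        [[ $(echo "$base_bdev_info" | jq .md_interleave) == null ]]  # no interleaved metadata
        [[ $(echo "$base_bdev_info" | jq .dif_type) == null ]]       # no DIF protection
    done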
00:19:41.734 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.734 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:41.734 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.991 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.991 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.991 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.991 12:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.991 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.991 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.991 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:42.254 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:42.254 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:42.254 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:42.254 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:42.544 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:42.544 "name": "pt4", 00:19:42.544 "aliases": [ 00:19:42.544 "00000000-0000-0000-0000-000000000004" 00:19:42.544 ], 00:19:42.544 "product_name": "passthru", 00:19:42.544 "block_size": 512, 00:19:42.544 "num_blocks": 65536, 00:19:42.544 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:42.544 "assigned_rate_limits": { 00:19:42.544 "rw_ios_per_sec": 0, 00:19:42.544 "rw_mbytes_per_sec": 0, 00:19:42.544 "r_mbytes_per_sec": 0, 00:19:42.544 "w_mbytes_per_sec": 0 00:19:42.544 }, 00:19:42.544 "claimed": true, 00:19:42.544 "claim_type": "exclusive_write", 00:19:42.544 "zoned": false, 00:19:42.544 "supported_io_types": { 00:19:42.544 "read": true, 00:19:42.545 "write": true, 00:19:42.545 "unmap": true, 00:19:42.545 "flush": true, 00:19:42.545 "reset": true, 00:19:42.545 "nvme_admin": false, 00:19:42.545 "nvme_io": false, 00:19:42.545 "nvme_io_md": false, 00:19:42.545 "write_zeroes": true, 00:19:42.545 "zcopy": true, 00:19:42.545 "get_zone_info": false, 00:19:42.545 "zone_management": false, 00:19:42.545 "zone_append": false, 00:19:42.545 "compare": false, 00:19:42.545 "compare_and_write": false, 00:19:42.545 "abort": true, 00:19:42.545 "seek_hole": false, 00:19:42.545 "seek_data": false, 00:19:42.545 "copy": true, 00:19:42.545 "nvme_iov_md": false 00:19:42.545 }, 00:19:42.545 "memory_domains": [ 00:19:42.545 { 00:19:42.545 "dma_device_id": "system", 00:19:42.545 "dma_device_type": 1 00:19:42.545 }, 00:19:42.545 { 00:19:42.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.545 "dma_device_type": 2 00:19:42.545 } 00:19:42.545 ], 00:19:42.545 "driver_specific": { 00:19:42.545 "passthru": { 00:19:42.545 "name": "pt4", 00:19:42.545 "base_bdev_name": "malloc4" 00:19:42.545 } 00:19:42.545 } 00:19:42.545 }' 00:19:42.545 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.802 12:01:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:42.802 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.059 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.059 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:43.059 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.059 12:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:43.316 [2024-07-25 12:01:29.200500] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 58106461-769a-4edb-9acf-e41a5b128919 '!=' 58106461-769a-4edb-9acf-e41a5b128919 ']' 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1354 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1354 ']' 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1354 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1354 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1354' 00:19:43.316 killing process with pid 1354 00:19:43.316 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1354 00:19:43.316 [2024-07-25 12:01:29.278750] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.316 [2024-07-25 12:01:29.278806] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.316 [2024-07-25 12:01:29.278865] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.316 [2024-07-25 12:01:29.278876] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2610e40 name raid_bdev1, state offline 00:19:43.316 12:01:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1354 00:19:43.316 [2024-07-25 12:01:29.309975] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.574 12:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:43.574 00:19:43.574 real 0m15.707s 00:19:43.574 user 0m28.389s 00:19:43.574 sys 0m2.786s 00:19:43.574 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.574 12:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.574 ************************************ 00:19:43.574 END TEST raid_superblock_test 00:19:43.574 ************************************ 00:19:43.574 12:01:29 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:19:43.574 12:01:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:43.574 12:01:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.574 12:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.574 ************************************ 00:19:43.574 START TEST raid_read_error_test 00:19:43.574 ************************************ 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:43.574 
12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.YlfWsNd0QE 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=4491 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 4491 /var/tmp/spdk-raid.sock 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 4491 ']' 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:43.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.574 12:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.574 [2024-07-25 12:01:29.660064] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
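(For orientation before the EAL output that follows: the read-error test drives the bdevperf app just launched above entirely over its RPC socket. The sketch below is assembled from rpc.py and bdevperf.py invocations that appear verbatim later in this log; the for-loop packaging is a condensation for readability, not the script's literal structure.)

    # sketch, not the captured trace: build four error-injectable base bdevs, assemble a
    # concat raid on top of them, arm a read failure, run I/O, and re-check the raid state
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks, wrapped in an error bdev, then in a
        # passthru bdev; the passthru bdevs BaseBdev1..4 are what the raid consumes
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # 4-way concat raid with a 64k strip size and an on-disk superblock
    $rpc -s $sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # inject read failures on the first base bdev, push I/O through bdevperf, then confirm
    # the raid bdev is still reported online with all 4 base bdevs (what
    # verify_raid_bdev_state checks for concat in the trace below)
    $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'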
00:19:43.574 [2024-07-25 12:01:29.660119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4491 ] 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:43.832 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:43.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.832 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:43.832 [2024-07-25 12:01:29.792651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.832 [2024-07-25 12:01:29.879344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.832 [2024-07-25 12:01:29.933912] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.832 [2024-07-25 12:01:29.933939] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.762 12:01:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.762 12:01:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:44.762 12:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:44.762 12:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.762 BaseBdev1_malloc 00:19:44.762 12:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:45.018 true 00:19:45.018 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:45.275 [2024-07-25 12:01:31.222183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:45.275 [2024-07-25 12:01:31.222223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.275 [2024-07-25 12:01:31.222240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ceb190 00:19:45.275 [2024-07-25 12:01:31.222253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.275 [2024-07-25 12:01:31.223828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.275 [2024-07-25 12:01:31.223854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.275 BaseBdev1 00:19:45.275 12:01:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:45.275 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.531 BaseBdev2_malloc 00:19:45.531 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:45.788 true 00:19:45.788 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:46.044 [2024-07-25 12:01:31.908183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:46.044 [2024-07-25 12:01:31.908223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.044 [2024-07-25 12:01:31.908241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cefe20 00:19:46.044 [2024-07-25 12:01:31.908252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.044 [2024-07-25 12:01:31.909629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.044 [2024-07-25 12:01:31.909656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.044 BaseBdev2 00:19:46.044 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:46.044 12:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:46.044 BaseBdev3_malloc 00:19:46.044 12:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:46.301 true 00:19:46.301 12:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:46.558 [2024-07-25 12:01:32.590276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:46.558 [2024-07-25 12:01:32.590314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.558 [2024-07-25 12:01:32.590335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf0d90 00:19:46.558 [2024-07-25 12:01:32.590346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.558 [2024-07-25 12:01:32.591731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.558 [2024-07-25 12:01:32.591758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.558 BaseBdev3 00:19:46.558 12:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:46.558 12:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:46.814 BaseBdev4_malloc 00:19:46.814 12:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:47.071 true 00:19:47.071 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:47.327 [2024-07-25 12:01:33.280423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:47.327 [2024-07-25 12:01:33.280461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.327 [2024-07-25 12:01:33.280478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf3000 00:19:47.327 [2024-07-25 12:01:33.280489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.327 [2024-07-25 12:01:33.281804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.328 [2024-07-25 12:01:33.281829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:47.328 BaseBdev4 00:19:47.328 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:47.585 [2024-07-25 12:01:33.521097] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.585 [2024-07-25 12:01:33.522312] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.585 [2024-07-25 12:01:33.522378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.585 [2024-07-25 12:01:33.522431] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:47.585 [2024-07-25 12:01:33.522645] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1cf3dd0 00:19:47.585 [2024-07-25 12:01:33.522656] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:47.585 [2024-07-25 12:01:33.522839] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cf5080 00:19:47.585 [2024-07-25 12:01:33.522977] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1cf3dd0 00:19:47.585 [2024-07-25 12:01:33.522987] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1cf3dd0 00:19:47.585 [2024-07-25 12:01:33.523081] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.585 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.842 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.842 "name": "raid_bdev1", 00:19:47.842 "uuid": "9fbdff64-9a3d-4ffe-a657-ee07a84befe2", 00:19:47.842 "strip_size_kb": 64, 00:19:47.842 "state": "online", 00:19:47.842 "raid_level": "concat", 00:19:47.842 "superblock": true, 00:19:47.842 "num_base_bdevs": 4, 00:19:47.842 "num_base_bdevs_discovered": 4, 00:19:47.842 "num_base_bdevs_operational": 4, 00:19:47.842 "base_bdevs_list": [ 00:19:47.842 { 00:19:47.842 "name": "BaseBdev1", 00:19:47.842 "uuid": "8aa62363-5abd-5c5f-91e0-2904a67a83f5", 00:19:47.842 "is_configured": true, 00:19:47.842 "data_offset": 2048, 00:19:47.842 "data_size": 63488 00:19:47.842 }, 00:19:47.842 { 00:19:47.842 "name": "BaseBdev2", 00:19:47.842 "uuid": "3b8c74ab-d1c1-50e0-89cf-3358c7f907bb", 00:19:47.842 "is_configured": true, 00:19:47.842 "data_offset": 2048, 00:19:47.842 "data_size": 63488 00:19:47.842 }, 00:19:47.842 { 00:19:47.842 "name": "BaseBdev3", 00:19:47.842 "uuid": "b4b5adac-048b-5fed-ab06-5e7ab24b27ea", 00:19:47.842 "is_configured": true, 00:19:47.842 "data_offset": 2048, 00:19:47.842 "data_size": 63488 00:19:47.842 }, 00:19:47.842 { 00:19:47.842 "name": "BaseBdev4", 00:19:47.842 "uuid": "f3403a8f-5185-5d32-a911-663412ff42c0", 00:19:47.842 "is_configured": true, 00:19:47.842 "data_offset": 2048, 00:19:47.842 "data_size": 63488 00:19:47.842 } 00:19:47.842 ] 00:19:47.842 }' 00:19:47.842 12:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.842 12:01:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.405 12:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:48.405 12:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:48.405 [2024-07-25 12:01:34.403644] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cf8c70 00:19:49.336 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:49.593 12:01:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.593 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.594 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.594 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.850 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.850 "name": "raid_bdev1", 00:19:49.850 "uuid": "9fbdff64-9a3d-4ffe-a657-ee07a84befe2", 00:19:49.850 "strip_size_kb": 64, 00:19:49.850 "state": "online", 00:19:49.850 "raid_level": "concat", 00:19:49.850 "superblock": true, 00:19:49.850 "num_base_bdevs": 4, 00:19:49.850 "num_base_bdevs_discovered": 4, 00:19:49.850 "num_base_bdevs_operational": 4, 00:19:49.850 "base_bdevs_list": [ 00:19:49.850 { 00:19:49.850 "name": "BaseBdev1", 00:19:49.850 "uuid": "8aa62363-5abd-5c5f-91e0-2904a67a83f5", 00:19:49.850 "is_configured": true, 00:19:49.850 "data_offset": 2048, 00:19:49.850 "data_size": 63488 00:19:49.850 }, 00:19:49.850 { 00:19:49.850 "name": "BaseBdev2", 00:19:49.850 "uuid": "3b8c74ab-d1c1-50e0-89cf-3358c7f907bb", 00:19:49.850 "is_configured": true, 00:19:49.850 "data_offset": 2048, 00:19:49.850 "data_size": 63488 00:19:49.850 }, 00:19:49.850 { 00:19:49.850 "name": "BaseBdev3", 00:19:49.850 "uuid": "b4b5adac-048b-5fed-ab06-5e7ab24b27ea", 00:19:49.850 "is_configured": true, 00:19:49.850 "data_offset": 2048, 00:19:49.850 "data_size": 63488 00:19:49.850 }, 00:19:49.850 { 00:19:49.850 "name": "BaseBdev4", 00:19:49.850 "uuid": "f3403a8f-5185-5d32-a911-663412ff42c0", 00:19:49.850 "is_configured": true, 00:19:49.850 "data_offset": 2048, 00:19:49.850 "data_size": 63488 00:19:49.850 } 00:19:49.850 ] 00:19:49.850 }' 00:19:49.850 12:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.850 12:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.414 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:50.672 [2024-07-25 12:01:36.542905] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.672 [2024-07-25 12:01:36.542932] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.672 [2024-07-25 12:01:36.545843] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.672 [2024-07-25 12:01:36.545880] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.672 [2024-07-25 12:01:36.545916] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.672 [2024-07-25 12:01:36.545927] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cf3dd0 name raid_bdev1, state offline 00:19:50.672 0 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 4491 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 4491 ']' 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 4491 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4491 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4491' 00:19:50.672 killing process with pid 4491 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 4491 00:19:50.672 [2024-07-25 12:01:36.620249] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:50.672 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 4491 00:19:50.672 [2024-07-25 12:01:36.646640] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.929 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.YlfWsNd0QE 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:19:50.930 00:19:50.930 real 0m7.270s 00:19:50.930 user 0m11.574s 00:19:50.930 sys 0m1.294s 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.930 12:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.930 ************************************ 00:19:50.930 END TEST raid_read_error_test 00:19:50.930 ************************************ 00:19:50.930 12:01:36 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:19:50.930 12:01:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:50.930 12:01:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.930 12:01:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.930 ************************************ 00:19:50.930 START TEST raid_write_error_test 00:19:50.930 ************************************ 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2r6iCXe2I4 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=5826 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 5826 /var/tmp/spdk-raid.sock 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 5826 ']' 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:50.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.930 12:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.930 [2024-07-25 12:01:37.017128] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:19:50.930 [2024-07-25 12:01:37.017196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5826 ] 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:19:51.188 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:51.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.188 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:51.188 [2024-07-25 12:01:37.146670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.188 [2024-07-25 12:01:37.230916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.188 [2024-07-25 12:01:37.290571] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.188 [2024-07-25 12:01:37.290605] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.753 12:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.753 12:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:51.753 12:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:51.753 12:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:52.011 BaseBdev1_malloc 00:19:52.011 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:52.269 true 00:19:52.269 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:52.526 [2024-07-25 12:01:38.495983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:52.526 [2024-07-25 12:01:38.496021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.526 [2024-07-25 12:01:38.496039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ee6190 00:19:52.526 [2024-07-25 12:01:38.496051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.526 [2024-07-25 12:01:38.497592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.526 [2024-07-25 12:01:38.497619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:52.526 BaseBdev1 00:19:52.526 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:52.526 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:52.782 BaseBdev2_malloc 00:19:52.782 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:53.039 true 00:19:53.039 12:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:53.296 [2024-07-25 12:01:39.178003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:53.296 [2024-07-25 12:01:39.178041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.296 [2024-07-25 12:01:39.178059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eeae20 00:19:53.296 [2024-07-25 12:01:39.178070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.296 [2024-07-25 12:01:39.179467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.296 [2024-07-25 12:01:39.179494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:53.296 BaseBdev2 00:19:53.296 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:53.296 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:53.553 BaseBdev3_malloc 00:19:53.553 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:53.553 true 00:19:53.553 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:53.809 [2024-07-25 12:01:39.852271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:53.809 [2024-07-25 12:01:39.852312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:53.809 [2024-07-25 12:01:39.852332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eebd90 00:19:53.809 [2024-07-25 12:01:39.852343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.809 [2024-07-25 12:01:39.853731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.809 [2024-07-25 12:01:39.853757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:53.809 BaseBdev3 00:19:53.809 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:53.809 12:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:54.065 BaseBdev4_malloc 00:19:54.065 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:54.322 true 00:19:54.322 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:54.579 [2024-07-25 12:01:40.522233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:54.579 [2024-07-25 12:01:40.522271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.579 [2024-07-25 12:01:40.522294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eee000 00:19:54.579 [2024-07-25 12:01:40.522306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.579 [2024-07-25 12:01:40.523707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.579 [2024-07-25 12:01:40.523733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:54.579 BaseBdev4 00:19:54.579 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:54.836 [2024-07-25 12:01:40.738839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.836 [2024-07-25 12:01:40.739957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.836 [2024-07-25 12:01:40.740020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.836 [2024-07-25 12:01:40.740072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:54.836 [2024-07-25 12:01:40.740294] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eeedd0 00:19:54.836 [2024-07-25 12:01:40.740305] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:54.836 [2024-07-25 12:01:40.740476] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ef0080 00:19:54.836 [2024-07-25 12:01:40.740611] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eeedd0 00:19:54.836 [2024-07-25 12:01:40.740620] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1eeedd0 00:19:54.836 [2024-07-25 
12:01:40.740712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.836 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.095 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.095 "name": "raid_bdev1", 00:19:55.095 "uuid": "d736a89d-529a-451e-970b-be6afd3ee587", 00:19:55.095 "strip_size_kb": 64, 00:19:55.095 "state": "online", 00:19:55.095 "raid_level": "concat", 00:19:55.095 "superblock": true, 00:19:55.095 "num_base_bdevs": 4, 00:19:55.095 "num_base_bdevs_discovered": 4, 00:19:55.095 "num_base_bdevs_operational": 4, 00:19:55.095 "base_bdevs_list": [ 00:19:55.095 { 00:19:55.095 "name": "BaseBdev1", 00:19:55.095 "uuid": "f72174aa-5ae5-57b6-86c5-4978df55d867", 00:19:55.095 "is_configured": true, 00:19:55.095 "data_offset": 2048, 00:19:55.095 "data_size": 63488 00:19:55.095 }, 00:19:55.095 { 00:19:55.095 "name": "BaseBdev2", 00:19:55.095 "uuid": "9e779a18-0285-5be4-a713-feb38e00afa0", 00:19:55.096 "is_configured": true, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 }, 00:19:55.096 { 00:19:55.096 "name": "BaseBdev3", 00:19:55.096 "uuid": "bc207772-10d3-5f75-aeb8-b1b87fb4b6cf", 00:19:55.096 "is_configured": true, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 }, 00:19:55.096 { 00:19:55.096 "name": "BaseBdev4", 00:19:55.096 "uuid": "f5ad9d9a-b1f1-5657-a180-e853c7e9f148", 00:19:55.096 "is_configured": true, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 } 00:19:55.096 ] 00:19:55.096 }' 00:19:55.096 12:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.096 12:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.682 12:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:55.682 12:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:55.682 [2024-07-25 12:01:41.637455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x1ef3c70 00:19:56.615 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.872 12:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.129 12:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.129 "name": "raid_bdev1", 00:19:57.129 "uuid": "d736a89d-529a-451e-970b-be6afd3ee587", 00:19:57.129 "strip_size_kb": 64, 00:19:57.129 "state": "online", 00:19:57.129 "raid_level": "concat", 00:19:57.129 "superblock": true, 00:19:57.129 "num_base_bdevs": 4, 00:19:57.129 "num_base_bdevs_discovered": 4, 00:19:57.129 "num_base_bdevs_operational": 4, 00:19:57.129 "base_bdevs_list": [ 00:19:57.129 { 00:19:57.129 "name": "BaseBdev1", 00:19:57.129 "uuid": "f72174aa-5ae5-57b6-86c5-4978df55d867", 00:19:57.129 "is_configured": true, 00:19:57.129 "data_offset": 2048, 00:19:57.129 "data_size": 63488 00:19:57.129 }, 00:19:57.129 { 00:19:57.129 "name": "BaseBdev2", 00:19:57.129 "uuid": "9e779a18-0285-5be4-a713-feb38e00afa0", 00:19:57.129 "is_configured": true, 00:19:57.129 "data_offset": 2048, 00:19:57.129 "data_size": 63488 00:19:57.129 }, 00:19:57.129 { 00:19:57.129 "name": "BaseBdev3", 00:19:57.129 "uuid": "bc207772-10d3-5f75-aeb8-b1b87fb4b6cf", 00:19:57.129 "is_configured": true, 00:19:57.129 "data_offset": 2048, 00:19:57.129 "data_size": 63488 00:19:57.129 }, 00:19:57.129 { 00:19:57.129 "name": "BaseBdev4", 00:19:57.129 "uuid": "f5ad9d9a-b1f1-5657-a180-e853c7e9f148", 00:19:57.129 "is_configured": true, 00:19:57.129 "data_offset": 2048, 00:19:57.129 "data_size": 63488 00:19:57.129 } 00:19:57.129 ] 00:19:57.129 }' 00:19:57.129 12:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:19:57.129 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.693 12:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.693 [2024-07-25 12:01:43.801136] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.693 [2024-07-25 12:01:43.801183] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.693 [2024-07-25 12:01:43.804095] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.693 [2024-07-25 12:01:43.804130] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.693 [2024-07-25 12:01:43.804171] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.693 [2024-07-25 12:01:43.804181] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eeedd0 name raid_bdev1, state offline 00:19:57.693 0 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 5826 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 5826 ']' 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 5826 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 5826 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 5826' 00:19:57.950 killing process with pid 5826 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 5826 00:19:57.950 [2024-07-25 12:01:43.875448] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.950 12:01:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 5826 00:19:57.950 [2024-07-25 12:01:43.902481] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2r6iCXe2I4 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:19:58.208 00:19:58.208 real 0m7.165s 00:19:58.208 user 0m11.333s 00:19:58.208 sys 0m1.318s 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.208 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.208 ************************************ 00:19:58.208 END TEST raid_write_error_test 00:19:58.208 ************************************ 00:19:58.208 12:01:44 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:58.208 12:01:44 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:58.208 12:01:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:58.208 12:01:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.208 12:01:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.208 ************************************ 00:19:58.208 START TEST raid_state_function_test 00:19:58.208 ************************************ 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:58.208 
12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=7080 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 7080' 00:19:58.208 Process raid pid: 7080 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 7080 /var/tmp/spdk-raid.sock 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 7080 ']' 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.208 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.208 [2024-07-25 12:01:44.266510] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
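The raid_state_function_test trace that follows drives the bdev_svc target entirely over the /var/tmp/spdk-raid.sock RPC socket. As a condensed, hand-written sketch (not the test script itself), the sequence it exercises looks roughly like the commands below; every RPC name, the socket path, and the bdev names are taken verbatim from this trace, while the `$RPC` shorthand and the `.state` jq filter are added here for brevity. The real test also deletes and re-creates Existed_Raid between steps, which the sketch omits.

  # assumes bdev_svc is already listening on /var/tmp/spdk-raid.sock (see waitforlisten above)
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create the raid1 bdev before any of its base bdevs exist; it is held in the "configuring" state
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring
  # add malloc base bdevs one at a time; num_base_bdevs_discovered climbs from 0 toward 4
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$b"
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  done
  # once the fourth base bdev is claimed, the raid bdev transitions to "online"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> online
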
00:19:58.208 [2024-07-25 12:01:44.266567] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:58.466 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:58.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.466 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:58.466 [2024-07-25 12:01:44.399378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.466 [2024-07-25 12:01:44.485087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.466 [2024-07-25 12:01:44.549161] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.466 [2024-07-25 12:01:44.549197] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:59.397 [2024-07-25 12:01:45.367265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.397 [2024-07-25 12:01:45.367300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.397 [2024-07-25 12:01:45.367310] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.397 [2024-07-25 12:01:45.367321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.397 [2024-07-25 12:01:45.367329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.397 [2024-07-25 12:01:45.367339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.397 [2024-07-25 12:01:45.367347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:59.397 [2024-07-25 12:01:45.367357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.397 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.653 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.653 "name": "Existed_Raid", 00:19:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.653 "strip_size_kb": 0, 00:19:59.653 "state": "configuring", 00:19:59.653 "raid_level": "raid1", 00:19:59.653 "superblock": false, 00:19:59.653 "num_base_bdevs": 4, 00:19:59.653 "num_base_bdevs_discovered": 0, 00:19:59.653 "num_base_bdevs_operational": 4, 00:19:59.653 "base_bdevs_list": [ 00:19:59.653 { 00:19:59.653 "name": "BaseBdev1", 00:19:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.653 "is_configured": false, 00:19:59.653 "data_offset": 0, 00:19:59.653 "data_size": 0 00:19:59.653 }, 00:19:59.653 { 00:19:59.653 "name": "BaseBdev2", 00:19:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.653 "is_configured": false, 00:19:59.653 "data_offset": 0, 00:19:59.653 "data_size": 0 00:19:59.653 }, 00:19:59.653 { 00:19:59.653 "name": "BaseBdev3", 00:19:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.653 "is_configured": false, 00:19:59.653 "data_offset": 0, 00:19:59.653 "data_size": 0 00:19:59.653 }, 00:19:59.653 { 00:19:59.653 "name": "BaseBdev4", 00:19:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.653 "is_configured": false, 00:19:59.653 "data_offset": 0, 00:19:59.653 "data_size": 0 00:19:59.653 } 00:19:59.653 ] 00:19:59.653 }' 00:19:59.653 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.653 12:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.216 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:00.216 [2024-07-25 12:01:46.317567] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.216 [2024-07-25 12:01:46.317596] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2661f60 name Existed_Raid, state configuring 00:20:00.216 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:00.473 [2024-07-25 
12:01:46.538166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.473 [2024-07-25 12:01:46.538192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.473 [2024-07-25 12:01:46.538201] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.473 [2024-07-25 12:01:46.538212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.473 [2024-07-25 12:01:46.538220] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:00.473 [2024-07-25 12:01:46.538229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.473 [2024-07-25 12:01:46.538237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:00.474 [2024-07-25 12:01:46.538247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:00.474 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:00.731 [2024-07-25 12:01:46.776398] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.731 BaseBdev1 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:00.731 12:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.988 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.244 [ 00:20:01.244 { 00:20:01.244 "name": "BaseBdev1", 00:20:01.244 "aliases": [ 00:20:01.244 "0b0645d2-93ea-4c52-8e9e-99a65a9de219" 00:20:01.244 ], 00:20:01.244 "product_name": "Malloc disk", 00:20:01.244 "block_size": 512, 00:20:01.244 "num_blocks": 65536, 00:20:01.244 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:01.244 "assigned_rate_limits": { 00:20:01.244 "rw_ios_per_sec": 0, 00:20:01.244 "rw_mbytes_per_sec": 0, 00:20:01.245 "r_mbytes_per_sec": 0, 00:20:01.245 "w_mbytes_per_sec": 0 00:20:01.245 }, 00:20:01.245 "claimed": true, 00:20:01.245 "claim_type": "exclusive_write", 00:20:01.245 "zoned": false, 00:20:01.245 "supported_io_types": { 00:20:01.245 "read": true, 00:20:01.245 "write": true, 00:20:01.245 "unmap": true, 00:20:01.245 "flush": true, 00:20:01.245 "reset": true, 00:20:01.245 "nvme_admin": false, 00:20:01.245 "nvme_io": false, 00:20:01.245 "nvme_io_md": false, 00:20:01.245 "write_zeroes": true, 00:20:01.245 "zcopy": true, 00:20:01.245 "get_zone_info": false, 00:20:01.245 "zone_management": false, 00:20:01.245 
"zone_append": false, 00:20:01.245 "compare": false, 00:20:01.245 "compare_and_write": false, 00:20:01.245 "abort": true, 00:20:01.245 "seek_hole": false, 00:20:01.245 "seek_data": false, 00:20:01.245 "copy": true, 00:20:01.245 "nvme_iov_md": false 00:20:01.245 }, 00:20:01.245 "memory_domains": [ 00:20:01.245 { 00:20:01.245 "dma_device_id": "system", 00:20:01.245 "dma_device_type": 1 00:20:01.245 }, 00:20:01.245 { 00:20:01.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.245 "dma_device_type": 2 00:20:01.245 } 00:20:01.245 ], 00:20:01.245 "driver_specific": {} 00:20:01.245 } 00:20:01.245 ] 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.245 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.502 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.502 "name": "Existed_Raid", 00:20:01.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.502 "strip_size_kb": 0, 00:20:01.502 "state": "configuring", 00:20:01.502 "raid_level": "raid1", 00:20:01.502 "superblock": false, 00:20:01.502 "num_base_bdevs": 4, 00:20:01.502 "num_base_bdevs_discovered": 1, 00:20:01.502 "num_base_bdevs_operational": 4, 00:20:01.502 "base_bdevs_list": [ 00:20:01.502 { 00:20:01.502 "name": "BaseBdev1", 00:20:01.502 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:01.502 "is_configured": true, 00:20:01.502 "data_offset": 0, 00:20:01.502 "data_size": 65536 00:20:01.502 }, 00:20:01.502 { 00:20:01.502 "name": "BaseBdev2", 00:20:01.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.502 "is_configured": false, 00:20:01.502 "data_offset": 0, 00:20:01.502 "data_size": 0 00:20:01.502 }, 00:20:01.502 { 00:20:01.502 "name": "BaseBdev3", 00:20:01.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.502 "is_configured": false, 00:20:01.502 "data_offset": 0, 00:20:01.502 "data_size": 0 00:20:01.502 }, 00:20:01.502 { 00:20:01.502 "name": "BaseBdev4", 00:20:01.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.502 "is_configured": false, 00:20:01.502 "data_offset": 0, 
00:20:01.502 "data_size": 0 00:20:01.502 } 00:20:01.502 ] 00:20:01.502 }' 00:20:01.502 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.502 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.066 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:02.323 [2024-07-25 12:01:48.252358] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:02.323 [2024-07-25 12:01:48.252396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26617d0 name Existed_Raid, state configuring 00:20:02.323 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:02.580 [2024-07-25 12:01:48.476980] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.580 [2024-07-25 12:01:48.478396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.580 [2024-07-25 12:01:48.478427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.580 [2024-07-25 12:01:48.478436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:02.580 [2024-07-25 12:01:48.478447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:02.580 [2024-07-25 12:01:48.478455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:02.580 [2024-07-25 12:01:48.478465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.580 12:01:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.838 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.838 "name": "Existed_Raid", 00:20:02.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.838 "strip_size_kb": 0, 00:20:02.838 "state": "configuring", 00:20:02.838 "raid_level": "raid1", 00:20:02.838 "superblock": false, 00:20:02.838 "num_base_bdevs": 4, 00:20:02.838 "num_base_bdevs_discovered": 1, 00:20:02.838 "num_base_bdevs_operational": 4, 00:20:02.838 "base_bdevs_list": [ 00:20:02.838 { 00:20:02.838 "name": "BaseBdev1", 00:20:02.838 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:02.838 "is_configured": true, 00:20:02.838 "data_offset": 0, 00:20:02.838 "data_size": 65536 00:20:02.838 }, 00:20:02.838 { 00:20:02.838 "name": "BaseBdev2", 00:20:02.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.838 "is_configured": false, 00:20:02.838 "data_offset": 0, 00:20:02.838 "data_size": 0 00:20:02.838 }, 00:20:02.838 { 00:20:02.838 "name": "BaseBdev3", 00:20:02.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.838 "is_configured": false, 00:20:02.838 "data_offset": 0, 00:20:02.838 "data_size": 0 00:20:02.838 }, 00:20:02.838 { 00:20:02.838 "name": "BaseBdev4", 00:20:02.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.838 "is_configured": false, 00:20:02.838 "data_offset": 0, 00:20:02.838 "data_size": 0 00:20:02.838 } 00:20:02.838 ] 00:20:02.838 }' 00:20:02.838 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.838 12:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.402 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:03.659 [2024-07-25 12:01:49.526978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.659 BaseBdev2 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:03.659 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:03.916 [ 00:20:03.916 { 00:20:03.916 "name": "BaseBdev2", 00:20:03.916 "aliases": [ 00:20:03.916 "b0a5340f-0021-43f4-9a67-fb7d14f826d6" 00:20:03.916 ], 00:20:03.916 "product_name": "Malloc disk", 00:20:03.916 "block_size": 512, 00:20:03.916 "num_blocks": 65536, 00:20:03.916 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:03.916 "assigned_rate_limits": { 00:20:03.916 
"rw_ios_per_sec": 0, 00:20:03.916 "rw_mbytes_per_sec": 0, 00:20:03.916 "r_mbytes_per_sec": 0, 00:20:03.916 "w_mbytes_per_sec": 0 00:20:03.916 }, 00:20:03.916 "claimed": true, 00:20:03.916 "claim_type": "exclusive_write", 00:20:03.916 "zoned": false, 00:20:03.916 "supported_io_types": { 00:20:03.916 "read": true, 00:20:03.916 "write": true, 00:20:03.916 "unmap": true, 00:20:03.916 "flush": true, 00:20:03.916 "reset": true, 00:20:03.916 "nvme_admin": false, 00:20:03.916 "nvme_io": false, 00:20:03.916 "nvme_io_md": false, 00:20:03.916 "write_zeroes": true, 00:20:03.916 "zcopy": true, 00:20:03.916 "get_zone_info": false, 00:20:03.916 "zone_management": false, 00:20:03.916 "zone_append": false, 00:20:03.916 "compare": false, 00:20:03.916 "compare_and_write": false, 00:20:03.916 "abort": true, 00:20:03.916 "seek_hole": false, 00:20:03.916 "seek_data": false, 00:20:03.916 "copy": true, 00:20:03.916 "nvme_iov_md": false 00:20:03.916 }, 00:20:03.916 "memory_domains": [ 00:20:03.916 { 00:20:03.916 "dma_device_id": "system", 00:20:03.916 "dma_device_type": 1 00:20:03.916 }, 00:20:03.916 { 00:20:03.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.916 "dma_device_type": 2 00:20:03.916 } 00:20:03.916 ], 00:20:03.916 "driver_specific": {} 00:20:03.916 } 00:20:03.916 ] 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.916 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.916 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.174 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.174 "name": "Existed_Raid", 00:20:04.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.174 "strip_size_kb": 0, 00:20:04.174 "state": "configuring", 00:20:04.174 "raid_level": "raid1", 00:20:04.174 "superblock": false, 00:20:04.174 "num_base_bdevs": 4, 00:20:04.174 "num_base_bdevs_discovered": 2, 00:20:04.174 "num_base_bdevs_operational": 4, 
00:20:04.174 "base_bdevs_list": [ 00:20:04.174 { 00:20:04.174 "name": "BaseBdev1", 00:20:04.174 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:04.174 "is_configured": true, 00:20:04.174 "data_offset": 0, 00:20:04.174 "data_size": 65536 00:20:04.174 }, 00:20:04.174 { 00:20:04.174 "name": "BaseBdev2", 00:20:04.174 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:04.174 "is_configured": true, 00:20:04.174 "data_offset": 0, 00:20:04.174 "data_size": 65536 00:20:04.174 }, 00:20:04.174 { 00:20:04.174 "name": "BaseBdev3", 00:20:04.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.174 "is_configured": false, 00:20:04.174 "data_offset": 0, 00:20:04.174 "data_size": 0 00:20:04.174 }, 00:20:04.174 { 00:20:04.174 "name": "BaseBdev4", 00:20:04.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.174 "is_configured": false, 00:20:04.174 "data_offset": 0, 00:20:04.174 "data_size": 0 00:20:04.174 } 00:20:04.174 ] 00:20:04.174 }' 00:20:04.174 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.174 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.739 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:04.997 [2024-07-25 12:01:51.018019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.997 BaseBdev3 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:04.997 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.255 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:05.512 [ 00:20:05.512 { 00:20:05.512 "name": "BaseBdev3", 00:20:05.512 "aliases": [ 00:20:05.512 "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020" 00:20:05.512 ], 00:20:05.512 "product_name": "Malloc disk", 00:20:05.512 "block_size": 512, 00:20:05.512 "num_blocks": 65536, 00:20:05.512 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:05.512 "assigned_rate_limits": { 00:20:05.512 "rw_ios_per_sec": 0, 00:20:05.512 "rw_mbytes_per_sec": 0, 00:20:05.512 "r_mbytes_per_sec": 0, 00:20:05.512 "w_mbytes_per_sec": 0 00:20:05.512 }, 00:20:05.512 "claimed": true, 00:20:05.512 "claim_type": "exclusive_write", 00:20:05.512 "zoned": false, 00:20:05.512 "supported_io_types": { 00:20:05.512 "read": true, 00:20:05.512 "write": true, 00:20:05.512 "unmap": true, 00:20:05.512 "flush": true, 00:20:05.512 "reset": true, 00:20:05.512 "nvme_admin": false, 00:20:05.512 "nvme_io": false, 00:20:05.512 "nvme_io_md": false, 00:20:05.512 
"write_zeroes": true, 00:20:05.512 "zcopy": true, 00:20:05.512 "get_zone_info": false, 00:20:05.512 "zone_management": false, 00:20:05.512 "zone_append": false, 00:20:05.512 "compare": false, 00:20:05.512 "compare_and_write": false, 00:20:05.512 "abort": true, 00:20:05.512 "seek_hole": false, 00:20:05.512 "seek_data": false, 00:20:05.512 "copy": true, 00:20:05.512 "nvme_iov_md": false 00:20:05.512 }, 00:20:05.512 "memory_domains": [ 00:20:05.512 { 00:20:05.512 "dma_device_id": "system", 00:20:05.512 "dma_device_type": 1 00:20:05.512 }, 00:20:05.512 { 00:20:05.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.512 "dma_device_type": 2 00:20:05.512 } 00:20:05.512 ], 00:20:05.512 "driver_specific": {} 00:20:05.512 } 00:20:05.512 ] 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.512 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.770 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.770 "name": "Existed_Raid", 00:20:05.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.770 "strip_size_kb": 0, 00:20:05.770 "state": "configuring", 00:20:05.770 "raid_level": "raid1", 00:20:05.770 "superblock": false, 00:20:05.770 "num_base_bdevs": 4, 00:20:05.770 "num_base_bdevs_discovered": 3, 00:20:05.770 "num_base_bdevs_operational": 4, 00:20:05.770 "base_bdevs_list": [ 00:20:05.770 { 00:20:05.770 "name": "BaseBdev1", 00:20:05.770 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:05.770 "is_configured": true, 00:20:05.770 "data_offset": 0, 00:20:05.770 "data_size": 65536 00:20:05.770 }, 00:20:05.770 { 00:20:05.770 "name": "BaseBdev2", 00:20:05.770 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:05.770 "is_configured": true, 00:20:05.770 "data_offset": 0, 00:20:05.770 "data_size": 65536 00:20:05.770 }, 00:20:05.770 { 00:20:05.770 "name": "BaseBdev3", 
00:20:05.770 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:05.770 "is_configured": true, 00:20:05.770 "data_offset": 0, 00:20:05.770 "data_size": 65536 00:20:05.770 }, 00:20:05.770 { 00:20:05.770 "name": "BaseBdev4", 00:20:05.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.770 "is_configured": false, 00:20:05.770 "data_offset": 0, 00:20:05.770 "data_size": 0 00:20:05.770 } 00:20:05.770 ] 00:20:05.770 }' 00:20:05.770 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.770 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.336 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:06.593 [2024-07-25 12:01:52.501097] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:06.593 [2024-07-25 12:01:52.501127] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2662830 00:20:06.593 [2024-07-25 12:01:52.501135] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:06.593 [2024-07-25 12:01:52.501321] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x265b280 00:20:06.593 [2024-07-25 12:01:52.501443] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2662830 00:20:06.593 [2024-07-25 12:01:52.501453] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2662830 00:20:06.593 [2024-07-25 12:01:52.501598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.593 BaseBdev4 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:06.593 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.850 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:06.850 [ 00:20:06.850 { 00:20:06.850 "name": "BaseBdev4", 00:20:06.850 "aliases": [ 00:20:06.850 "187359e8-af33-42b7-968b-2f9d1933142a" 00:20:06.850 ], 00:20:06.850 "product_name": "Malloc disk", 00:20:06.850 "block_size": 512, 00:20:06.850 "num_blocks": 65536, 00:20:06.850 "uuid": "187359e8-af33-42b7-968b-2f9d1933142a", 00:20:06.850 "assigned_rate_limits": { 00:20:06.850 "rw_ios_per_sec": 0, 00:20:06.850 "rw_mbytes_per_sec": 0, 00:20:06.850 "r_mbytes_per_sec": 0, 00:20:06.850 "w_mbytes_per_sec": 0 00:20:06.850 }, 00:20:06.850 "claimed": true, 00:20:06.850 "claim_type": "exclusive_write", 00:20:06.850 "zoned": false, 00:20:06.850 "supported_io_types": { 00:20:06.850 "read": true, 
00:20:06.850 "write": true, 00:20:06.850 "unmap": true, 00:20:06.850 "flush": true, 00:20:06.850 "reset": true, 00:20:06.850 "nvme_admin": false, 00:20:06.850 "nvme_io": false, 00:20:06.850 "nvme_io_md": false, 00:20:06.850 "write_zeroes": true, 00:20:06.850 "zcopy": true, 00:20:06.850 "get_zone_info": false, 00:20:06.850 "zone_management": false, 00:20:06.850 "zone_append": false, 00:20:06.850 "compare": false, 00:20:06.850 "compare_and_write": false, 00:20:06.850 "abort": true, 00:20:06.850 "seek_hole": false, 00:20:06.850 "seek_data": false, 00:20:06.850 "copy": true, 00:20:06.850 "nvme_iov_md": false 00:20:06.850 }, 00:20:06.850 "memory_domains": [ 00:20:06.850 { 00:20:06.850 "dma_device_id": "system", 00:20:06.850 "dma_device_type": 1 00:20:06.850 }, 00:20:06.850 { 00:20:06.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.850 "dma_device_type": 2 00:20:06.850 } 00:20:06.850 ], 00:20:06.850 "driver_specific": {} 00:20:06.850 } 00:20:06.850 ] 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.108 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.108 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.108 "name": "Existed_Raid", 00:20:07.108 "uuid": "9353911e-3393-4938-8ac0-cc63512ce86a", 00:20:07.108 "strip_size_kb": 0, 00:20:07.108 "state": "online", 00:20:07.108 "raid_level": "raid1", 00:20:07.108 "superblock": false, 00:20:07.108 "num_base_bdevs": 4, 00:20:07.108 "num_base_bdevs_discovered": 4, 00:20:07.108 "num_base_bdevs_operational": 4, 00:20:07.108 "base_bdevs_list": [ 00:20:07.108 { 00:20:07.108 "name": "BaseBdev1", 00:20:07.108 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:07.108 "is_configured": true, 00:20:07.108 "data_offset": 0, 00:20:07.108 "data_size": 65536 00:20:07.108 }, 00:20:07.108 { 00:20:07.108 "name": "BaseBdev2", 00:20:07.108 "uuid": 
"b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:07.108 "is_configured": true, 00:20:07.108 "data_offset": 0, 00:20:07.108 "data_size": 65536 00:20:07.108 }, 00:20:07.108 { 00:20:07.108 "name": "BaseBdev3", 00:20:07.108 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:07.108 "is_configured": true, 00:20:07.108 "data_offset": 0, 00:20:07.108 "data_size": 65536 00:20:07.108 }, 00:20:07.108 { 00:20:07.108 "name": "BaseBdev4", 00:20:07.108 "uuid": "187359e8-af33-42b7-968b-2f9d1933142a", 00:20:07.108 "is_configured": true, 00:20:07.108 "data_offset": 0, 00:20:07.108 "data_size": 65536 00:20:07.108 } 00:20:07.108 ] 00:20:07.108 }' 00:20:07.108 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.108 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:07.675 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:08.240 [2024-07-25 12:01:54.258031] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:08.240 "name": "Existed_Raid", 00:20:08.240 "aliases": [ 00:20:08.240 "9353911e-3393-4938-8ac0-cc63512ce86a" 00:20:08.240 ], 00:20:08.240 "product_name": "Raid Volume", 00:20:08.240 "block_size": 512, 00:20:08.240 "num_blocks": 65536, 00:20:08.240 "uuid": "9353911e-3393-4938-8ac0-cc63512ce86a", 00:20:08.240 "assigned_rate_limits": { 00:20:08.240 "rw_ios_per_sec": 0, 00:20:08.240 "rw_mbytes_per_sec": 0, 00:20:08.240 "r_mbytes_per_sec": 0, 00:20:08.240 "w_mbytes_per_sec": 0 00:20:08.240 }, 00:20:08.240 "claimed": false, 00:20:08.240 "zoned": false, 00:20:08.240 "supported_io_types": { 00:20:08.240 "read": true, 00:20:08.240 "write": true, 00:20:08.240 "unmap": false, 00:20:08.240 "flush": false, 00:20:08.240 "reset": true, 00:20:08.240 "nvme_admin": false, 00:20:08.240 "nvme_io": false, 00:20:08.240 "nvme_io_md": false, 00:20:08.240 "write_zeroes": true, 00:20:08.240 "zcopy": false, 00:20:08.240 "get_zone_info": false, 00:20:08.240 "zone_management": false, 00:20:08.240 "zone_append": false, 00:20:08.240 "compare": false, 00:20:08.240 "compare_and_write": false, 00:20:08.240 "abort": false, 00:20:08.240 "seek_hole": false, 00:20:08.240 "seek_data": false, 00:20:08.240 "copy": false, 00:20:08.240 "nvme_iov_md": false 00:20:08.240 }, 00:20:08.240 "memory_domains": [ 00:20:08.240 { 00:20:08.240 "dma_device_id": "system", 00:20:08.240 "dma_device_type": 1 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.240 "dma_device_type": 2 00:20:08.240 }, 
00:20:08.240 { 00:20:08.240 "dma_device_id": "system", 00:20:08.240 "dma_device_type": 1 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.240 "dma_device_type": 2 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "system", 00:20:08.240 "dma_device_type": 1 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.240 "dma_device_type": 2 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "system", 00:20:08.240 "dma_device_type": 1 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.240 "dma_device_type": 2 00:20:08.240 } 00:20:08.240 ], 00:20:08.240 "driver_specific": { 00:20:08.240 "raid": { 00:20:08.240 "uuid": "9353911e-3393-4938-8ac0-cc63512ce86a", 00:20:08.240 "strip_size_kb": 0, 00:20:08.240 "state": "online", 00:20:08.240 "raid_level": "raid1", 00:20:08.240 "superblock": false, 00:20:08.240 "num_base_bdevs": 4, 00:20:08.240 "num_base_bdevs_discovered": 4, 00:20:08.240 "num_base_bdevs_operational": 4, 00:20:08.240 "base_bdevs_list": [ 00:20:08.240 { 00:20:08.240 "name": "BaseBdev1", 00:20:08.240 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:08.240 "is_configured": true, 00:20:08.240 "data_offset": 0, 00:20:08.240 "data_size": 65536 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "name": "BaseBdev2", 00:20:08.240 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:08.240 "is_configured": true, 00:20:08.240 "data_offset": 0, 00:20:08.240 "data_size": 65536 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "name": "BaseBdev3", 00:20:08.240 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:08.240 "is_configured": true, 00:20:08.240 "data_offset": 0, 00:20:08.240 "data_size": 65536 00:20:08.240 }, 00:20:08.240 { 00:20:08.240 "name": "BaseBdev4", 00:20:08.240 "uuid": "187359e8-af33-42b7-968b-2f9d1933142a", 00:20:08.240 "is_configured": true, 00:20:08.240 "data_offset": 0, 00:20:08.240 "data_size": 65536 00:20:08.240 } 00:20:08.240 ] 00:20:08.240 } 00:20:08.240 } 00:20:08.240 }' 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:08.240 BaseBdev2 00:20:08.240 BaseBdev3 00:20:08.240 BaseBdev4' 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:08.240 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.546 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.546 "name": "BaseBdev1", 00:20:08.546 "aliases": [ 00:20:08.546 "0b0645d2-93ea-4c52-8e9e-99a65a9de219" 00:20:08.546 ], 00:20:08.546 "product_name": "Malloc disk", 00:20:08.546 "block_size": 512, 00:20:08.546 "num_blocks": 65536, 00:20:08.546 "uuid": "0b0645d2-93ea-4c52-8e9e-99a65a9de219", 00:20:08.546 "assigned_rate_limits": { 00:20:08.546 "rw_ios_per_sec": 0, 00:20:08.546 "rw_mbytes_per_sec": 0, 00:20:08.546 "r_mbytes_per_sec": 0, 00:20:08.546 "w_mbytes_per_sec": 0 00:20:08.546 }, 00:20:08.546 "claimed": true, 00:20:08.546 "claim_type": "exclusive_write", 00:20:08.546 "zoned": false, 00:20:08.546 
"supported_io_types": { 00:20:08.546 "read": true, 00:20:08.546 "write": true, 00:20:08.546 "unmap": true, 00:20:08.546 "flush": true, 00:20:08.546 "reset": true, 00:20:08.546 "nvme_admin": false, 00:20:08.546 "nvme_io": false, 00:20:08.546 "nvme_io_md": false, 00:20:08.546 "write_zeroes": true, 00:20:08.546 "zcopy": true, 00:20:08.546 "get_zone_info": false, 00:20:08.546 "zone_management": false, 00:20:08.546 "zone_append": false, 00:20:08.546 "compare": false, 00:20:08.546 "compare_and_write": false, 00:20:08.546 "abort": true, 00:20:08.546 "seek_hole": false, 00:20:08.546 "seek_data": false, 00:20:08.546 "copy": true, 00:20:08.546 "nvme_iov_md": false 00:20:08.546 }, 00:20:08.546 "memory_domains": [ 00:20:08.546 { 00:20:08.546 "dma_device_id": "system", 00:20:08.546 "dma_device_type": 1 00:20:08.546 }, 00:20:08.546 { 00:20:08.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.546 "dma_device_type": 2 00:20:08.546 } 00:20:08.546 ], 00:20:08.546 "driver_specific": {} 00:20:08.546 }' 00:20:08.546 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.546 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.546 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.546 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:08.823 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.080 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.080 "name": "BaseBdev2", 00:20:09.080 "aliases": [ 00:20:09.080 "b0a5340f-0021-43f4-9a67-fb7d14f826d6" 00:20:09.080 ], 00:20:09.080 "product_name": "Malloc disk", 00:20:09.080 "block_size": 512, 00:20:09.080 "num_blocks": 65536, 00:20:09.080 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:09.080 "assigned_rate_limits": { 00:20:09.080 "rw_ios_per_sec": 0, 00:20:09.080 "rw_mbytes_per_sec": 0, 00:20:09.080 "r_mbytes_per_sec": 0, 00:20:09.080 "w_mbytes_per_sec": 0 00:20:09.080 }, 00:20:09.080 "claimed": true, 00:20:09.080 "claim_type": "exclusive_write", 00:20:09.080 "zoned": false, 00:20:09.080 "supported_io_types": { 00:20:09.080 "read": true, 00:20:09.080 "write": true, 00:20:09.080 "unmap": true, 00:20:09.080 "flush": true, 00:20:09.080 "reset": true, 00:20:09.080 
"nvme_admin": false, 00:20:09.080 "nvme_io": false, 00:20:09.080 "nvme_io_md": false, 00:20:09.080 "write_zeroes": true, 00:20:09.080 "zcopy": true, 00:20:09.080 "get_zone_info": false, 00:20:09.081 "zone_management": false, 00:20:09.081 "zone_append": false, 00:20:09.081 "compare": false, 00:20:09.081 "compare_and_write": false, 00:20:09.081 "abort": true, 00:20:09.081 "seek_hole": false, 00:20:09.081 "seek_data": false, 00:20:09.081 "copy": true, 00:20:09.081 "nvme_iov_md": false 00:20:09.081 }, 00:20:09.081 "memory_domains": [ 00:20:09.081 { 00:20:09.081 "dma_device_id": "system", 00:20:09.081 "dma_device_type": 1 00:20:09.081 }, 00:20:09.081 { 00:20:09.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.081 "dma_device_type": 2 00:20:09.081 } 00:20:09.081 ], 00:20:09.081 "driver_specific": {} 00:20:09.081 }' 00:20:09.081 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.081 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.081 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:09.081 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.081 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:09.337 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.594 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.594 "name": "BaseBdev3", 00:20:09.594 "aliases": [ 00:20:09.594 "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020" 00:20:09.594 ], 00:20:09.594 "product_name": "Malloc disk", 00:20:09.594 "block_size": 512, 00:20:09.594 "num_blocks": 65536, 00:20:09.594 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:09.594 "assigned_rate_limits": { 00:20:09.594 "rw_ios_per_sec": 0, 00:20:09.594 "rw_mbytes_per_sec": 0, 00:20:09.594 "r_mbytes_per_sec": 0, 00:20:09.594 "w_mbytes_per_sec": 0 00:20:09.594 }, 00:20:09.594 "claimed": true, 00:20:09.594 "claim_type": "exclusive_write", 00:20:09.594 "zoned": false, 00:20:09.594 "supported_io_types": { 00:20:09.594 "read": true, 00:20:09.594 "write": true, 00:20:09.594 "unmap": true, 00:20:09.594 "flush": true, 00:20:09.594 "reset": true, 00:20:09.594 "nvme_admin": false, 00:20:09.594 "nvme_io": false, 00:20:09.594 "nvme_io_md": false, 00:20:09.594 "write_zeroes": true, 00:20:09.594 "zcopy": true, 00:20:09.594 "get_zone_info": 
false, 00:20:09.594 "zone_management": false, 00:20:09.594 "zone_append": false, 00:20:09.594 "compare": false, 00:20:09.594 "compare_and_write": false, 00:20:09.594 "abort": true, 00:20:09.594 "seek_hole": false, 00:20:09.594 "seek_data": false, 00:20:09.594 "copy": true, 00:20:09.594 "nvme_iov_md": false 00:20:09.594 }, 00:20:09.594 "memory_domains": [ 00:20:09.594 { 00:20:09.594 "dma_device_id": "system", 00:20:09.594 "dma_device_type": 1 00:20:09.594 }, 00:20:09.594 { 00:20:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.594 "dma_device_type": 2 00:20:09.594 } 00:20:09.594 ], 00:20:09.594 "driver_specific": {} 00:20:09.594 }' 00:20:09.594 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.594 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:09.851 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.109 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.109 "name": "BaseBdev4", 00:20:10.109 "aliases": [ 00:20:10.109 "187359e8-af33-42b7-968b-2f9d1933142a" 00:20:10.109 ], 00:20:10.109 "product_name": "Malloc disk", 00:20:10.109 "block_size": 512, 00:20:10.109 "num_blocks": 65536, 00:20:10.109 "uuid": "187359e8-af33-42b7-968b-2f9d1933142a", 00:20:10.109 "assigned_rate_limits": { 00:20:10.109 "rw_ios_per_sec": 0, 00:20:10.109 "rw_mbytes_per_sec": 0, 00:20:10.109 "r_mbytes_per_sec": 0, 00:20:10.109 "w_mbytes_per_sec": 0 00:20:10.109 }, 00:20:10.109 "claimed": true, 00:20:10.109 "claim_type": "exclusive_write", 00:20:10.109 "zoned": false, 00:20:10.109 "supported_io_types": { 00:20:10.109 "read": true, 00:20:10.109 "write": true, 00:20:10.109 "unmap": true, 00:20:10.109 "flush": true, 00:20:10.109 "reset": true, 00:20:10.109 "nvme_admin": false, 00:20:10.109 "nvme_io": false, 00:20:10.109 "nvme_io_md": false, 00:20:10.109 "write_zeroes": true, 00:20:10.109 "zcopy": true, 00:20:10.109 "get_zone_info": false, 00:20:10.109 "zone_management": false, 00:20:10.109 "zone_append": false, 00:20:10.109 "compare": false, 00:20:10.109 "compare_and_write": false, 00:20:10.109 "abort": true, 
00:20:10.109 "seek_hole": false, 00:20:10.109 "seek_data": false, 00:20:10.109 "copy": true, 00:20:10.109 "nvme_iov_md": false 00:20:10.109 }, 00:20:10.109 "memory_domains": [ 00:20:10.109 { 00:20:10.109 "dma_device_id": "system", 00:20:10.109 "dma_device_type": 1 00:20:10.109 }, 00:20:10.109 { 00:20:10.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.109 "dma_device_type": 2 00:20:10.109 } 00:20:10.109 ], 00:20:10.109 "driver_specific": {} 00:20:10.109 }' 00:20:10.109 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.367 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:10.624 [2024-07-25 12:01:56.708300] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.624 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.881 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.881 "name": "Existed_Raid", 00:20:10.881 "uuid": "9353911e-3393-4938-8ac0-cc63512ce86a", 00:20:10.881 "strip_size_kb": 0, 00:20:10.881 "state": "online", 00:20:10.881 "raid_level": "raid1", 00:20:10.881 "superblock": false, 00:20:10.881 "num_base_bdevs": 4, 00:20:10.881 "num_base_bdevs_discovered": 3, 00:20:10.881 "num_base_bdevs_operational": 3, 00:20:10.881 "base_bdevs_list": [ 00:20:10.881 { 00:20:10.881 "name": null, 00:20:10.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.881 "is_configured": false, 00:20:10.881 "data_offset": 0, 00:20:10.881 "data_size": 65536 00:20:10.881 }, 00:20:10.881 { 00:20:10.881 "name": "BaseBdev2", 00:20:10.881 "uuid": "b0a5340f-0021-43f4-9a67-fb7d14f826d6", 00:20:10.881 "is_configured": true, 00:20:10.881 "data_offset": 0, 00:20:10.881 "data_size": 65536 00:20:10.881 }, 00:20:10.881 { 00:20:10.881 "name": "BaseBdev3", 00:20:10.881 "uuid": "9c5f94e1-3a1c-44e2-a9cd-dfa4277b4020", 00:20:10.881 "is_configured": true, 00:20:10.881 "data_offset": 0, 00:20:10.881 "data_size": 65536 00:20:10.881 }, 00:20:10.881 { 00:20:10.881 "name": "BaseBdev4", 00:20:10.881 "uuid": "187359e8-af33-42b7-968b-2f9d1933142a", 00:20:10.881 "is_configured": true, 00:20:10.881 "data_offset": 0, 00:20:10.881 "data_size": 65536 00:20:10.881 } 00:20:10.881 ] 00:20:10.881 }' 00:20:10.881 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.881 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.445 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:11.445 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.445 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.445 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:11.702 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:11.702 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:11.702 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:11.959 [2024-07-25 12:01:57.872417] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:11.959 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:11.959 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.959 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:11.959 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.216 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.216 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.216 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:12.473 [2024-07-25 12:01:58.343505] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.473 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.473 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.473 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.473 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:12.731 [2024-07-25 12:01:58.806761] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:12.731 [2024-07-25 12:01:58.806830] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.731 [2024-07-25 12:01:58.817082] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.731 [2024-07-25 12:01:58.817111] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.731 [2024-07-25 12:01:58.817127] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2662830 name Existed_Raid, state offline 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.731 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:12.988 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 
00:20:13.245 BaseBdev2 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:13.245 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.515 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.772 [ 00:20:13.772 { 00:20:13.772 "name": "BaseBdev2", 00:20:13.772 "aliases": [ 00:20:13.772 "ce7b4dbd-0d7b-47c0-957b-326905120fa4" 00:20:13.772 ], 00:20:13.772 "product_name": "Malloc disk", 00:20:13.772 "block_size": 512, 00:20:13.772 "num_blocks": 65536, 00:20:13.772 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:13.772 "assigned_rate_limits": { 00:20:13.772 "rw_ios_per_sec": 0, 00:20:13.772 "rw_mbytes_per_sec": 0, 00:20:13.772 "r_mbytes_per_sec": 0, 00:20:13.772 "w_mbytes_per_sec": 0 00:20:13.772 }, 00:20:13.772 "claimed": false, 00:20:13.772 "zoned": false, 00:20:13.772 "supported_io_types": { 00:20:13.772 "read": true, 00:20:13.772 "write": true, 00:20:13.772 "unmap": true, 00:20:13.772 "flush": true, 00:20:13.772 "reset": true, 00:20:13.772 "nvme_admin": false, 00:20:13.772 "nvme_io": false, 00:20:13.772 "nvme_io_md": false, 00:20:13.772 "write_zeroes": true, 00:20:13.772 "zcopy": true, 00:20:13.772 "get_zone_info": false, 00:20:13.772 "zone_management": false, 00:20:13.772 "zone_append": false, 00:20:13.772 "compare": false, 00:20:13.772 "compare_and_write": false, 00:20:13.772 "abort": true, 00:20:13.772 "seek_hole": false, 00:20:13.772 "seek_data": false, 00:20:13.772 "copy": true, 00:20:13.772 "nvme_iov_md": false 00:20:13.772 }, 00:20:13.772 "memory_domains": [ 00:20:13.772 { 00:20:13.772 "dma_device_id": "system", 00:20:13.772 "dma_device_type": 1 00:20:13.772 }, 00:20:13.772 { 00:20:13.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.772 "dma_device_type": 2 00:20:13.772 } 00:20:13.772 ], 00:20:13.772 "driver_specific": {} 00:20:13.772 } 00:20:13.772 ] 00:20:13.772 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:13.772 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:13.772 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:13.772 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.028 BaseBdev3 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:14.028 12:01:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:14.028 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.284 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:14.541 [ 00:20:14.541 { 00:20:14.541 "name": "BaseBdev3", 00:20:14.541 "aliases": [ 00:20:14.541 "8421a373-313b-4341-b73c-2d8c48ce64dd" 00:20:14.541 ], 00:20:14.541 "product_name": "Malloc disk", 00:20:14.541 "block_size": 512, 00:20:14.541 "num_blocks": 65536, 00:20:14.541 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:14.541 "assigned_rate_limits": { 00:20:14.541 "rw_ios_per_sec": 0, 00:20:14.541 "rw_mbytes_per_sec": 0, 00:20:14.541 "r_mbytes_per_sec": 0, 00:20:14.541 "w_mbytes_per_sec": 0 00:20:14.541 }, 00:20:14.541 "claimed": false, 00:20:14.541 "zoned": false, 00:20:14.541 "supported_io_types": { 00:20:14.541 "read": true, 00:20:14.541 "write": true, 00:20:14.541 "unmap": true, 00:20:14.541 "flush": true, 00:20:14.541 "reset": true, 00:20:14.541 "nvme_admin": false, 00:20:14.541 "nvme_io": false, 00:20:14.541 "nvme_io_md": false, 00:20:14.541 "write_zeroes": true, 00:20:14.541 "zcopy": true, 00:20:14.541 "get_zone_info": false, 00:20:14.541 "zone_management": false, 00:20:14.541 "zone_append": false, 00:20:14.541 "compare": false, 00:20:14.541 "compare_and_write": false, 00:20:14.541 "abort": true, 00:20:14.541 "seek_hole": false, 00:20:14.541 "seek_data": false, 00:20:14.541 "copy": true, 00:20:14.541 "nvme_iov_md": false 00:20:14.541 }, 00:20:14.541 "memory_domains": [ 00:20:14.541 { 00:20:14.541 "dma_device_id": "system", 00:20:14.541 "dma_device_type": 1 00:20:14.541 }, 00:20:14.541 { 00:20:14.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.541 "dma_device_type": 2 00:20:14.541 } 00:20:14.541 ], 00:20:14.541 "driver_specific": {} 00:20:14.541 } 00:20:14.541 ] 00:20:14.541 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:14.541 12:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:14.541 12:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:14.541 12:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:14.541 BaseBdev4 00:20:14.541 12:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.798 12:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:15.054 [ 00:20:15.054 { 00:20:15.054 "name": "BaseBdev4", 00:20:15.054 "aliases": [ 00:20:15.054 "c7ced2e9-032c-4475-9f5c-7803e3626110" 00:20:15.054 ], 00:20:15.054 "product_name": "Malloc disk", 00:20:15.054 "block_size": 512, 00:20:15.054 "num_blocks": 65536, 00:20:15.054 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:15.054 "assigned_rate_limits": { 00:20:15.054 "rw_ios_per_sec": 0, 00:20:15.054 "rw_mbytes_per_sec": 0, 00:20:15.054 "r_mbytes_per_sec": 0, 00:20:15.054 "w_mbytes_per_sec": 0 00:20:15.054 }, 00:20:15.054 "claimed": false, 00:20:15.054 "zoned": false, 00:20:15.054 "supported_io_types": { 00:20:15.054 "read": true, 00:20:15.054 "write": true, 00:20:15.054 "unmap": true, 00:20:15.054 "flush": true, 00:20:15.054 "reset": true, 00:20:15.054 "nvme_admin": false, 00:20:15.054 "nvme_io": false, 00:20:15.054 "nvme_io_md": false, 00:20:15.054 "write_zeroes": true, 00:20:15.054 "zcopy": true, 00:20:15.054 "get_zone_info": false, 00:20:15.054 "zone_management": false, 00:20:15.054 "zone_append": false, 00:20:15.054 "compare": false, 00:20:15.054 "compare_and_write": false, 00:20:15.054 "abort": true, 00:20:15.054 "seek_hole": false, 00:20:15.054 "seek_data": false, 00:20:15.054 "copy": true, 00:20:15.054 "nvme_iov_md": false 00:20:15.054 }, 00:20:15.054 "memory_domains": [ 00:20:15.054 { 00:20:15.054 "dma_device_id": "system", 00:20:15.055 "dma_device_type": 1 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.055 "dma_device_type": 2 00:20:15.055 } 00:20:15.055 ], 00:20:15.055 "driver_specific": {} 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 12:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:15.055 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:15.055 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:15.055 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:15.311 [2024-07-25 12:02:01.320262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.311 [2024-07-25 12:02:01.320297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.311 [2024-07-25 12:02:01.320314] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.311 [2024-07-25 12:02:01.321541] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:15.311 [2024-07-25 12:02:01.321581] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:15.311 12:02:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.311 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.568 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.568 "name": "Existed_Raid", 00:20:15.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.568 "strip_size_kb": 0, 00:20:15.568 "state": "configuring", 00:20:15.568 "raid_level": "raid1", 00:20:15.568 "superblock": false, 00:20:15.568 "num_base_bdevs": 4, 00:20:15.568 "num_base_bdevs_discovered": 3, 00:20:15.568 "num_base_bdevs_operational": 4, 00:20:15.568 "base_bdevs_list": [ 00:20:15.568 { 00:20:15.568 "name": "BaseBdev1", 00:20:15.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.568 "is_configured": false, 00:20:15.568 "data_offset": 0, 00:20:15.568 "data_size": 0 00:20:15.568 }, 00:20:15.568 { 00:20:15.568 "name": "BaseBdev2", 00:20:15.568 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:15.568 "is_configured": true, 00:20:15.568 "data_offset": 0, 00:20:15.568 "data_size": 65536 00:20:15.568 }, 00:20:15.568 { 00:20:15.568 "name": "BaseBdev3", 00:20:15.568 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:15.568 "is_configured": true, 00:20:15.568 "data_offset": 0, 00:20:15.568 "data_size": 65536 00:20:15.568 }, 00:20:15.568 { 00:20:15.568 "name": "BaseBdev4", 00:20:15.568 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:15.568 "is_configured": true, 00:20:15.568 "data_offset": 0, 00:20:15.568 "data_size": 65536 00:20:15.568 } 00:20:15.568 ] 00:20:15.568 }' 00:20:15.568 12:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.568 12:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.131 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:16.389 [2024-07-25 12:02:02.330905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.389 
12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.389 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.646 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.646 "name": "Existed_Raid", 00:20:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.646 "strip_size_kb": 0, 00:20:16.646 "state": "configuring", 00:20:16.646 "raid_level": "raid1", 00:20:16.646 "superblock": false, 00:20:16.646 "num_base_bdevs": 4, 00:20:16.646 "num_base_bdevs_discovered": 2, 00:20:16.646 "num_base_bdevs_operational": 4, 00:20:16.646 "base_bdevs_list": [ 00:20:16.646 { 00:20:16.646 "name": "BaseBdev1", 00:20:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.646 "is_configured": false, 00:20:16.646 "data_offset": 0, 00:20:16.646 "data_size": 0 00:20:16.646 }, 00:20:16.646 { 00:20:16.646 "name": null, 00:20:16.646 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:16.646 "is_configured": false, 00:20:16.646 "data_offset": 0, 00:20:16.646 "data_size": 65536 00:20:16.646 }, 00:20:16.646 { 00:20:16.646 "name": "BaseBdev3", 00:20:16.646 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:16.646 "is_configured": true, 00:20:16.646 "data_offset": 0, 00:20:16.646 "data_size": 65536 00:20:16.646 }, 00:20:16.646 { 00:20:16.646 "name": "BaseBdev4", 00:20:16.646 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:16.646 "is_configured": true, 00:20:16.646 "data_offset": 0, 00:20:16.646 "data_size": 65536 00:20:16.646 } 00:20:16.646 ] 00:20:16.646 }' 00:20:16.646 12:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.646 12:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.210 12:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.210 12:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.466 12:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:17.466 12:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:17.723 [2024-07-25 12:02:03.597457] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.723 BaseBdev1 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:17.723 12:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:17.980 [ 00:20:17.981 { 00:20:17.981 "name": "BaseBdev1", 00:20:17.981 "aliases": [ 00:20:17.981 "b77d573a-47b7-40d6-8f92-60a2632582b7" 00:20:17.981 ], 00:20:17.981 "product_name": "Malloc disk", 00:20:17.981 "block_size": 512, 00:20:17.981 "num_blocks": 65536, 00:20:17.981 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:17.981 "assigned_rate_limits": { 00:20:17.981 "rw_ios_per_sec": 0, 00:20:17.981 "rw_mbytes_per_sec": 0, 00:20:17.981 "r_mbytes_per_sec": 0, 00:20:17.981 "w_mbytes_per_sec": 0 00:20:17.981 }, 00:20:17.981 "claimed": true, 00:20:17.981 "claim_type": "exclusive_write", 00:20:17.981 "zoned": false, 00:20:17.981 "supported_io_types": { 00:20:17.981 "read": true, 00:20:17.981 "write": true, 00:20:17.981 "unmap": true, 00:20:17.981 "flush": true, 00:20:17.981 "reset": true, 00:20:17.981 "nvme_admin": false, 00:20:17.981 "nvme_io": false, 00:20:17.981 "nvme_io_md": false, 00:20:17.981 "write_zeroes": true, 00:20:17.981 "zcopy": true, 00:20:17.981 "get_zone_info": false, 00:20:17.981 "zone_management": false, 00:20:17.981 "zone_append": false, 00:20:17.981 "compare": false, 00:20:17.981 "compare_and_write": false, 00:20:17.981 "abort": true, 00:20:17.981 "seek_hole": false, 00:20:17.981 "seek_data": false, 00:20:17.981 "copy": true, 00:20:17.981 "nvme_iov_md": false 00:20:17.981 }, 00:20:17.981 "memory_domains": [ 00:20:17.981 { 00:20:17.981 "dma_device_id": "system", 00:20:17.981 "dma_device_type": 1 00:20:17.981 }, 00:20:17.981 { 00:20:17.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.981 "dma_device_type": 2 00:20:17.981 } 00:20:17.981 ], 00:20:17.981 "driver_specific": {} 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.981 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.238 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.238 "name": "Existed_Raid", 00:20:18.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.238 "strip_size_kb": 0, 00:20:18.238 "state": "configuring", 00:20:18.238 "raid_level": "raid1", 00:20:18.238 "superblock": false, 00:20:18.238 "num_base_bdevs": 4, 00:20:18.238 "num_base_bdevs_discovered": 3, 00:20:18.238 "num_base_bdevs_operational": 4, 00:20:18.238 "base_bdevs_list": [ 00:20:18.238 { 00:20:18.238 "name": "BaseBdev1", 00:20:18.238 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:18.238 "is_configured": true, 00:20:18.238 "data_offset": 0, 00:20:18.238 "data_size": 65536 00:20:18.238 }, 00:20:18.238 { 00:20:18.238 "name": null, 00:20:18.238 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:18.238 "is_configured": false, 00:20:18.238 "data_offset": 0, 00:20:18.238 "data_size": 65536 00:20:18.238 }, 00:20:18.238 { 00:20:18.238 "name": "BaseBdev3", 00:20:18.238 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:18.238 "is_configured": true, 00:20:18.238 "data_offset": 0, 00:20:18.238 "data_size": 65536 00:20:18.238 }, 00:20:18.238 { 00:20:18.238 "name": "BaseBdev4", 00:20:18.238 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:18.238 "is_configured": true, 00:20:18.238 "data_offset": 0, 00:20:18.238 "data_size": 65536 00:20:18.238 } 00:20:18.238 ] 00:20:18.238 }' 00:20:18.238 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.238 12:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.802 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.802 12:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:19.059 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:19.059 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:19.316 [2024-07-25 12:02:05.330027] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.316 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.574 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.574 "name": "Existed_Raid", 00:20:19.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.574 "strip_size_kb": 0, 00:20:19.574 "state": "configuring", 00:20:19.574 "raid_level": "raid1", 00:20:19.574 "superblock": false, 00:20:19.574 "num_base_bdevs": 4, 00:20:19.574 "num_base_bdevs_discovered": 2, 00:20:19.574 "num_base_bdevs_operational": 4, 00:20:19.574 "base_bdevs_list": [ 00:20:19.574 { 00:20:19.574 "name": "BaseBdev1", 00:20:19.574 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:19.574 "is_configured": true, 00:20:19.574 "data_offset": 0, 00:20:19.574 "data_size": 65536 00:20:19.574 }, 00:20:19.574 { 00:20:19.574 "name": null, 00:20:19.574 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:19.574 "is_configured": false, 00:20:19.574 "data_offset": 0, 00:20:19.574 "data_size": 65536 00:20:19.574 }, 00:20:19.574 { 00:20:19.574 "name": null, 00:20:19.574 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:19.574 "is_configured": false, 00:20:19.574 "data_offset": 0, 00:20:19.574 "data_size": 65536 00:20:19.574 }, 00:20:19.574 { 00:20:19.574 "name": "BaseBdev4", 00:20:19.574 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:19.574 "is_configured": true, 00:20:19.574 "data_offset": 0, 00:20:19.574 "data_size": 65536 00:20:19.574 } 00:20:19.574 ] 00:20:19.574 }' 00:20:19.574 12:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.574 12:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.139 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.139 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.397 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:20.397 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
00:20:20.654 [2024-07-25 12:02:06.549250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.654 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.655 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.655 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.655 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.655 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.912 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.912 "name": "Existed_Raid", 00:20:20.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.912 "strip_size_kb": 0, 00:20:20.912 "state": "configuring", 00:20:20.912 "raid_level": "raid1", 00:20:20.912 "superblock": false, 00:20:20.912 "num_base_bdevs": 4, 00:20:20.912 "num_base_bdevs_discovered": 3, 00:20:20.912 "num_base_bdevs_operational": 4, 00:20:20.912 "base_bdevs_list": [ 00:20:20.912 { 00:20:20.912 "name": "BaseBdev1", 00:20:20.912 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:20.912 "is_configured": true, 00:20:20.912 "data_offset": 0, 00:20:20.912 "data_size": 65536 00:20:20.912 }, 00:20:20.912 { 00:20:20.912 "name": null, 00:20:20.912 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:20.912 "is_configured": false, 00:20:20.912 "data_offset": 0, 00:20:20.912 "data_size": 65536 00:20:20.912 }, 00:20:20.912 { 00:20:20.912 "name": "BaseBdev3", 00:20:20.912 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:20.912 "is_configured": true, 00:20:20.912 "data_offset": 0, 00:20:20.912 "data_size": 65536 00:20:20.912 }, 00:20:20.912 { 00:20:20.912 "name": "BaseBdev4", 00:20:20.912 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:20.912 "is_configured": true, 00:20:20.912 "data_offset": 0, 00:20:20.912 "data_size": 65536 00:20:20.912 } 00:20:20.912 ] 00:20:20.912 }' 00:20:20.912 12:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.912 12:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.511 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:21.511 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:21.511 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:21.511 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:21.768 [2024-07-25 12:02:07.784520] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.768 12:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.025 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.025 "name": "Existed_Raid", 00:20:22.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.025 "strip_size_kb": 0, 00:20:22.025 "state": "configuring", 00:20:22.025 "raid_level": "raid1", 00:20:22.025 "superblock": false, 00:20:22.025 "num_base_bdevs": 4, 00:20:22.025 "num_base_bdevs_discovered": 2, 00:20:22.025 "num_base_bdevs_operational": 4, 00:20:22.025 "base_bdevs_list": [ 00:20:22.025 { 00:20:22.025 "name": null, 00:20:22.025 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:22.025 "is_configured": false, 00:20:22.025 "data_offset": 0, 00:20:22.025 "data_size": 65536 00:20:22.025 }, 00:20:22.025 { 00:20:22.025 "name": null, 00:20:22.025 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:22.025 "is_configured": false, 00:20:22.025 "data_offset": 0, 00:20:22.025 "data_size": 65536 00:20:22.025 }, 00:20:22.025 { 00:20:22.025 "name": "BaseBdev3", 00:20:22.025 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:22.025 "is_configured": true, 00:20:22.025 "data_offset": 0, 00:20:22.025 "data_size": 65536 00:20:22.025 }, 00:20:22.025 { 00:20:22.025 "name": "BaseBdev4", 00:20:22.025 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:22.025 "is_configured": true, 00:20:22.025 "data_offset": 0, 00:20:22.025 "data_size": 65536 00:20:22.025 } 00:20:22.025 ] 00:20:22.025 }' 00:20:22.025 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.025 12:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:22.589 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.589 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:22.846 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:22.846 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:23.102 [2024-07-25 12:02:08.977873] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.102 12:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.102 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.102 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.358 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.358 "name": "Existed_Raid", 00:20:23.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.358 "strip_size_kb": 0, 00:20:23.358 "state": "configuring", 00:20:23.358 "raid_level": "raid1", 00:20:23.358 "superblock": false, 00:20:23.358 "num_base_bdevs": 4, 00:20:23.358 "num_base_bdevs_discovered": 3, 00:20:23.358 "num_base_bdevs_operational": 4, 00:20:23.358 "base_bdevs_list": [ 00:20:23.358 { 00:20:23.358 "name": null, 00:20:23.358 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:23.358 "is_configured": false, 00:20:23.358 "data_offset": 0, 00:20:23.358 "data_size": 65536 00:20:23.358 }, 00:20:23.358 { 00:20:23.358 "name": "BaseBdev2", 00:20:23.358 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:23.358 "is_configured": true, 00:20:23.358 "data_offset": 0, 00:20:23.358 "data_size": 65536 00:20:23.358 }, 00:20:23.358 { 00:20:23.358 "name": "BaseBdev3", 00:20:23.358 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:23.358 "is_configured": true, 00:20:23.358 "data_offset": 0, 00:20:23.358 "data_size": 65536 00:20:23.358 }, 00:20:23.358 { 00:20:23.358 "name": "BaseBdev4", 00:20:23.358 "uuid": 
"c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:23.358 "is_configured": true, 00:20:23.358 "data_offset": 0, 00:20:23.358 "data_size": 65536 00:20:23.358 } 00:20:23.358 ] 00:20:23.358 }' 00:20:23.358 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.358 12:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.923 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.923 12:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:23.923 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:23.923 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:23.923 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.180 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b77d573a-47b7-40d6-8f92-60a2632582b7 00:20:24.437 [2024-07-25 12:02:10.492905] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:24.437 [2024-07-25 12:02:10.492936] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x265ad30 00:20:24.437 [2024-07-25 12:02:10.492944] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:24.437 [2024-07-25 12:02:10.493118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2813e30 00:20:24.437 [2024-07-25 12:02:10.493242] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x265ad30 00:20:24.437 [2024-07-25 12:02:10.493252] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x265ad30 00:20:24.437 [2024-07-25 12:02:10.493396] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.437 NewBaseBdev 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:24.437 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.694 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:24.952 [ 00:20:24.952 { 00:20:24.952 "name": "NewBaseBdev", 00:20:24.952 "aliases": [ 00:20:24.952 
"b77d573a-47b7-40d6-8f92-60a2632582b7" 00:20:24.952 ], 00:20:24.952 "product_name": "Malloc disk", 00:20:24.952 "block_size": 512, 00:20:24.952 "num_blocks": 65536, 00:20:24.952 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:24.952 "assigned_rate_limits": { 00:20:24.952 "rw_ios_per_sec": 0, 00:20:24.952 "rw_mbytes_per_sec": 0, 00:20:24.952 "r_mbytes_per_sec": 0, 00:20:24.952 "w_mbytes_per_sec": 0 00:20:24.952 }, 00:20:24.952 "claimed": true, 00:20:24.952 "claim_type": "exclusive_write", 00:20:24.952 "zoned": false, 00:20:24.952 "supported_io_types": { 00:20:24.952 "read": true, 00:20:24.952 "write": true, 00:20:24.952 "unmap": true, 00:20:24.952 "flush": true, 00:20:24.952 "reset": true, 00:20:24.952 "nvme_admin": false, 00:20:24.952 "nvme_io": false, 00:20:24.952 "nvme_io_md": false, 00:20:24.952 "write_zeroes": true, 00:20:24.952 "zcopy": true, 00:20:24.952 "get_zone_info": false, 00:20:24.952 "zone_management": false, 00:20:24.952 "zone_append": false, 00:20:24.952 "compare": false, 00:20:24.952 "compare_and_write": false, 00:20:24.952 "abort": true, 00:20:24.952 "seek_hole": false, 00:20:24.952 "seek_data": false, 00:20:24.952 "copy": true, 00:20:24.952 "nvme_iov_md": false 00:20:24.952 }, 00:20:24.952 "memory_domains": [ 00:20:24.952 { 00:20:24.952 "dma_device_id": "system", 00:20:24.952 "dma_device_type": 1 00:20:24.952 }, 00:20:24.952 { 00:20:24.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.952 "dma_device_type": 2 00:20:24.952 } 00:20:24.952 ], 00:20:24.952 "driver_specific": {} 00:20:24.952 } 00:20:24.952 ] 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.952 12:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.209 12:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.209 "name": "Existed_Raid", 00:20:25.209 "uuid": "c45f10a3-c39f-48ce-9d8d-8362ef7a003c", 00:20:25.209 "strip_size_kb": 0, 00:20:25.209 "state": "online", 00:20:25.209 "raid_level": "raid1", 00:20:25.209 "superblock": false, 00:20:25.209 "num_base_bdevs": 4, 00:20:25.209 
"num_base_bdevs_discovered": 4, 00:20:25.209 "num_base_bdevs_operational": 4, 00:20:25.209 "base_bdevs_list": [ 00:20:25.209 { 00:20:25.209 "name": "NewBaseBdev", 00:20:25.209 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:25.209 "is_configured": true, 00:20:25.209 "data_offset": 0, 00:20:25.209 "data_size": 65536 00:20:25.209 }, 00:20:25.209 { 00:20:25.209 "name": "BaseBdev2", 00:20:25.209 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:25.209 "is_configured": true, 00:20:25.209 "data_offset": 0, 00:20:25.209 "data_size": 65536 00:20:25.209 }, 00:20:25.209 { 00:20:25.209 "name": "BaseBdev3", 00:20:25.209 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:25.209 "is_configured": true, 00:20:25.209 "data_offset": 0, 00:20:25.209 "data_size": 65536 00:20:25.209 }, 00:20:25.209 { 00:20:25.209 "name": "BaseBdev4", 00:20:25.209 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:25.209 "is_configured": true, 00:20:25.209 "data_offset": 0, 00:20:25.209 "data_size": 65536 00:20:25.209 } 00:20:25.209 ] 00:20:25.209 }' 00:20:25.209 12:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.209 12:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:26.139 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:26.139 [2024-07-25 12:02:12.241838] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:26.397 "name": "Existed_Raid", 00:20:26.397 "aliases": [ 00:20:26.397 "c45f10a3-c39f-48ce-9d8d-8362ef7a003c" 00:20:26.397 ], 00:20:26.397 "product_name": "Raid Volume", 00:20:26.397 "block_size": 512, 00:20:26.397 "num_blocks": 65536, 00:20:26.397 "uuid": "c45f10a3-c39f-48ce-9d8d-8362ef7a003c", 00:20:26.397 "assigned_rate_limits": { 00:20:26.397 "rw_ios_per_sec": 0, 00:20:26.397 "rw_mbytes_per_sec": 0, 00:20:26.397 "r_mbytes_per_sec": 0, 00:20:26.397 "w_mbytes_per_sec": 0 00:20:26.397 }, 00:20:26.397 "claimed": false, 00:20:26.397 "zoned": false, 00:20:26.397 "supported_io_types": { 00:20:26.397 "read": true, 00:20:26.397 "write": true, 00:20:26.397 "unmap": false, 00:20:26.397 "flush": false, 00:20:26.397 "reset": true, 00:20:26.397 "nvme_admin": false, 00:20:26.397 "nvme_io": false, 00:20:26.397 "nvme_io_md": false, 00:20:26.397 "write_zeroes": true, 00:20:26.397 "zcopy": false, 00:20:26.397 "get_zone_info": false, 00:20:26.397 "zone_management": false, 00:20:26.397 "zone_append": false, 00:20:26.397 "compare": false, 00:20:26.397 "compare_and_write": false, 00:20:26.397 "abort": 
false, 00:20:26.397 "seek_hole": false, 00:20:26.397 "seek_data": false, 00:20:26.397 "copy": false, 00:20:26.397 "nvme_iov_md": false 00:20:26.397 }, 00:20:26.397 "memory_domains": [ 00:20:26.397 { 00:20:26.397 "dma_device_id": "system", 00:20:26.397 "dma_device_type": 1 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.397 "dma_device_type": 2 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "system", 00:20:26.397 "dma_device_type": 1 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.397 "dma_device_type": 2 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "system", 00:20:26.397 "dma_device_type": 1 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.397 "dma_device_type": 2 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "system", 00:20:26.397 "dma_device_type": 1 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.397 "dma_device_type": 2 00:20:26.397 } 00:20:26.397 ], 00:20:26.397 "driver_specific": { 00:20:26.397 "raid": { 00:20:26.397 "uuid": "c45f10a3-c39f-48ce-9d8d-8362ef7a003c", 00:20:26.397 "strip_size_kb": 0, 00:20:26.397 "state": "online", 00:20:26.397 "raid_level": "raid1", 00:20:26.397 "superblock": false, 00:20:26.397 "num_base_bdevs": 4, 00:20:26.397 "num_base_bdevs_discovered": 4, 00:20:26.397 "num_base_bdevs_operational": 4, 00:20:26.397 "base_bdevs_list": [ 00:20:26.397 { 00:20:26.397 "name": "NewBaseBdev", 00:20:26.397 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:26.397 "is_configured": true, 00:20:26.397 "data_offset": 0, 00:20:26.397 "data_size": 65536 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "name": "BaseBdev2", 00:20:26.397 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:26.397 "is_configured": true, 00:20:26.397 "data_offset": 0, 00:20:26.397 "data_size": 65536 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "name": "BaseBdev3", 00:20:26.397 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:26.397 "is_configured": true, 00:20:26.397 "data_offset": 0, 00:20:26.397 "data_size": 65536 00:20:26.397 }, 00:20:26.397 { 00:20:26.397 "name": "BaseBdev4", 00:20:26.397 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:26.397 "is_configured": true, 00:20:26.397 "data_offset": 0, 00:20:26.397 "data_size": 65536 00:20:26.397 } 00:20:26.397 ] 00:20:26.397 } 00:20:26.397 } 00:20:26.397 }' 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:26.397 BaseBdev2 00:20:26.397 BaseBdev3 00:20:26.397 BaseBdev4' 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:26.397 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.654 "name": "NewBaseBdev", 00:20:26.654 "aliases": [ 00:20:26.654 "b77d573a-47b7-40d6-8f92-60a2632582b7" 00:20:26.654 ], 00:20:26.654 "product_name": "Malloc disk", 00:20:26.654 
"block_size": 512, 00:20:26.654 "num_blocks": 65536, 00:20:26.654 "uuid": "b77d573a-47b7-40d6-8f92-60a2632582b7", 00:20:26.654 "assigned_rate_limits": { 00:20:26.654 "rw_ios_per_sec": 0, 00:20:26.654 "rw_mbytes_per_sec": 0, 00:20:26.654 "r_mbytes_per_sec": 0, 00:20:26.654 "w_mbytes_per_sec": 0 00:20:26.654 }, 00:20:26.654 "claimed": true, 00:20:26.654 "claim_type": "exclusive_write", 00:20:26.654 "zoned": false, 00:20:26.654 "supported_io_types": { 00:20:26.654 "read": true, 00:20:26.654 "write": true, 00:20:26.654 "unmap": true, 00:20:26.654 "flush": true, 00:20:26.654 "reset": true, 00:20:26.654 "nvme_admin": false, 00:20:26.654 "nvme_io": false, 00:20:26.654 "nvme_io_md": false, 00:20:26.654 "write_zeroes": true, 00:20:26.654 "zcopy": true, 00:20:26.654 "get_zone_info": false, 00:20:26.654 "zone_management": false, 00:20:26.654 "zone_append": false, 00:20:26.654 "compare": false, 00:20:26.654 "compare_and_write": false, 00:20:26.654 "abort": true, 00:20:26.654 "seek_hole": false, 00:20:26.654 "seek_data": false, 00:20:26.654 "copy": true, 00:20:26.654 "nvme_iov_md": false 00:20:26.654 }, 00:20:26.654 "memory_domains": [ 00:20:26.654 { 00:20:26.654 "dma_device_id": "system", 00:20:26.654 "dma_device_type": 1 00:20:26.654 }, 00:20:26.654 { 00:20:26.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.654 "dma_device_type": 2 00:20:26.654 } 00:20:26.654 ], 00:20:26.654 "driver_specific": {} 00:20:26.654 }' 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.654 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.911 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.911 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.911 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.911 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:26.911 12:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.167 "name": "BaseBdev2", 00:20:27.167 "aliases": [ 00:20:27.167 "ce7b4dbd-0d7b-47c0-957b-326905120fa4" 00:20:27.167 ], 00:20:27.167 "product_name": "Malloc disk", 00:20:27.167 "block_size": 512, 00:20:27.167 "num_blocks": 65536, 00:20:27.167 "uuid": "ce7b4dbd-0d7b-47c0-957b-326905120fa4", 00:20:27.167 "assigned_rate_limits": { 00:20:27.167 
"rw_ios_per_sec": 0, 00:20:27.167 "rw_mbytes_per_sec": 0, 00:20:27.167 "r_mbytes_per_sec": 0, 00:20:27.167 "w_mbytes_per_sec": 0 00:20:27.167 }, 00:20:27.167 "claimed": true, 00:20:27.167 "claim_type": "exclusive_write", 00:20:27.167 "zoned": false, 00:20:27.167 "supported_io_types": { 00:20:27.167 "read": true, 00:20:27.167 "write": true, 00:20:27.167 "unmap": true, 00:20:27.167 "flush": true, 00:20:27.167 "reset": true, 00:20:27.167 "nvme_admin": false, 00:20:27.167 "nvme_io": false, 00:20:27.167 "nvme_io_md": false, 00:20:27.167 "write_zeroes": true, 00:20:27.167 "zcopy": true, 00:20:27.167 "get_zone_info": false, 00:20:27.167 "zone_management": false, 00:20:27.167 "zone_append": false, 00:20:27.167 "compare": false, 00:20:27.167 "compare_and_write": false, 00:20:27.167 "abort": true, 00:20:27.167 "seek_hole": false, 00:20:27.167 "seek_data": false, 00:20:27.167 "copy": true, 00:20:27.167 "nvme_iov_md": false 00:20:27.167 }, 00:20:27.167 "memory_domains": [ 00:20:27.167 { 00:20:27.167 "dma_device_id": "system", 00:20:27.167 "dma_device_type": 1 00:20:27.167 }, 00:20:27.167 { 00:20:27.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.167 "dma_device_type": 2 00:20:27.167 } 00:20:27.167 ], 00:20:27.167 "driver_specific": {} 00:20:27.167 }' 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.167 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:27.425 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.681 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.681 "name": "BaseBdev3", 00:20:27.681 "aliases": [ 00:20:27.681 "8421a373-313b-4341-b73c-2d8c48ce64dd" 00:20:27.681 ], 00:20:27.681 "product_name": "Malloc disk", 00:20:27.681 "block_size": 512, 00:20:27.681 "num_blocks": 65536, 00:20:27.681 "uuid": "8421a373-313b-4341-b73c-2d8c48ce64dd", 00:20:27.681 "assigned_rate_limits": { 00:20:27.681 "rw_ios_per_sec": 0, 00:20:27.681 "rw_mbytes_per_sec": 0, 00:20:27.681 "r_mbytes_per_sec": 0, 00:20:27.681 "w_mbytes_per_sec": 0 00:20:27.681 }, 00:20:27.681 "claimed": true, 
00:20:27.681 "claim_type": "exclusive_write", 00:20:27.681 "zoned": false, 00:20:27.681 "supported_io_types": { 00:20:27.681 "read": true, 00:20:27.681 "write": true, 00:20:27.681 "unmap": true, 00:20:27.681 "flush": true, 00:20:27.681 "reset": true, 00:20:27.681 "nvme_admin": false, 00:20:27.681 "nvme_io": false, 00:20:27.681 "nvme_io_md": false, 00:20:27.682 "write_zeroes": true, 00:20:27.682 "zcopy": true, 00:20:27.682 "get_zone_info": false, 00:20:27.682 "zone_management": false, 00:20:27.682 "zone_append": false, 00:20:27.682 "compare": false, 00:20:27.682 "compare_and_write": false, 00:20:27.682 "abort": true, 00:20:27.682 "seek_hole": false, 00:20:27.682 "seek_data": false, 00:20:27.682 "copy": true, 00:20:27.682 "nvme_iov_md": false 00:20:27.682 }, 00:20:27.682 "memory_domains": [ 00:20:27.682 { 00:20:27.682 "dma_device_id": "system", 00:20:27.682 "dma_device_type": 1 00:20:27.682 }, 00:20:27.682 { 00:20:27.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.682 "dma_device_type": 2 00:20:27.682 } 00:20:27.682 ], 00:20:27.682 "driver_specific": {} 00:20:27.682 }' 00:20:27.682 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.682 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.682 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.682 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.682 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.939 12:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.939 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:27.939 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.197 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.197 "name": "BaseBdev4", 00:20:28.197 "aliases": [ 00:20:28.197 "c7ced2e9-032c-4475-9f5c-7803e3626110" 00:20:28.197 ], 00:20:28.197 "product_name": "Malloc disk", 00:20:28.197 "block_size": 512, 00:20:28.197 "num_blocks": 65536, 00:20:28.197 "uuid": "c7ced2e9-032c-4475-9f5c-7803e3626110", 00:20:28.197 "assigned_rate_limits": { 00:20:28.197 "rw_ios_per_sec": 0, 00:20:28.197 "rw_mbytes_per_sec": 0, 00:20:28.197 "r_mbytes_per_sec": 0, 00:20:28.197 "w_mbytes_per_sec": 0 00:20:28.197 }, 00:20:28.197 "claimed": true, 00:20:28.197 "claim_type": "exclusive_write", 00:20:28.197 "zoned": false, 00:20:28.197 "supported_io_types": { 00:20:28.197 "read": true, 00:20:28.197 "write": true, 00:20:28.197 
"unmap": true, 00:20:28.197 "flush": true, 00:20:28.197 "reset": true, 00:20:28.197 "nvme_admin": false, 00:20:28.197 "nvme_io": false, 00:20:28.197 "nvme_io_md": false, 00:20:28.197 "write_zeroes": true, 00:20:28.197 "zcopy": true, 00:20:28.197 "get_zone_info": false, 00:20:28.197 "zone_management": false, 00:20:28.197 "zone_append": false, 00:20:28.197 "compare": false, 00:20:28.197 "compare_and_write": false, 00:20:28.197 "abort": true, 00:20:28.197 "seek_hole": false, 00:20:28.197 "seek_data": false, 00:20:28.197 "copy": true, 00:20:28.197 "nvme_iov_md": false 00:20:28.197 }, 00:20:28.197 "memory_domains": [ 00:20:28.197 { 00:20:28.197 "dma_device_id": "system", 00:20:28.197 "dma_device_type": 1 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.197 "dma_device_type": 2 00:20:28.197 } 00:20:28.197 ], 00:20:28.197 "driver_specific": {} 00:20:28.197 }' 00:20:28.197 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.197 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.197 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.197 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.454 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.454 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.454 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.454 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.455 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.455 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.455 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.455 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.455 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:28.714 [2024-07-25 12:02:14.744228] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.714 [2024-07-25 12:02:14.744250] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.714 [2024-07-25 12:02:14.744299] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.714 [2024-07-25 12:02:14.744544] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.714 [2024-07-25 12:02:14.744556] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x265ad30 name Existed_Raid, state offline 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 7080 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 7080 ']' 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 7080 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 7080 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 7080' 00:20:28.714 killing process with pid 7080 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 7080 00:20:28.714 [2024-07-25 12:02:14.806911] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.714 12:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 7080 00:20:28.972 [2024-07-25 12:02:14.839310] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.972 12:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:28.972 00:20:28.972 real 0m30.829s 00:20:28.972 user 0m56.714s 00:20:28.972 sys 0m5.442s 00:20:28.973 12:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.973 12:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.973 ************************************ 00:20:28.973 END TEST raid_state_function_test 00:20:28.973 ************************************ 00:20:28.973 12:02:15 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:28.973 12:02:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:28.973 12:02:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.973 12:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.232 ************************************ 00:20:29.232 START TEST raid_state_function_test_sb 00:20:29.232 ************************************ 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.232 12:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=12935 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 12935' 00:20:29.232 Process raid pid: 12935 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:29.232 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 12935 /var/tmp/spdk-raid.sock 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 12935 ']' 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:29.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.233 12:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.233 [2024-07-25 12:02:15.178733] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:29.233 [2024-07-25 12:02:15.178788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:29.233 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:29.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:29.233 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:29.233 [2024-07-25 12:02:15.310522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.492 [2024-07-25 12:02:15.393488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.492 [2024-07-25 12:02:15.451000] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.492 [2024-07-25 12:02:15.451033] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.060 12:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.060 12:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:30.060 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:30.319 [2024-07-25 12:02:16.281121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:30.319 [2024-07-25 12:02:16.281161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:30.319 [2024-07-25 12:02:16.281172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.319 [2024-07-25 12:02:16.281183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.319 [2024-07-25 12:02:16.281191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.319 [2024-07-25 12:02:16.281201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.319 [2024-07-25 12:02:16.281209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:30.319 [2024-07-25 12:02:16.281218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.319 12:02:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.319 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.578 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.578 "name": "Existed_Raid", 00:20:30.578 "uuid": "ff4bc6cc-9298-41f3-9384-80461524db8e", 00:20:30.578 "strip_size_kb": 0, 00:20:30.578 "state": "configuring", 00:20:30.578 "raid_level": "raid1", 00:20:30.578 "superblock": true, 00:20:30.578 "num_base_bdevs": 4, 00:20:30.578 "num_base_bdevs_discovered": 0, 00:20:30.578 "num_base_bdevs_operational": 4, 00:20:30.578 "base_bdevs_list": [ 00:20:30.578 { 00:20:30.578 "name": "BaseBdev1", 00:20:30.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.578 "is_configured": false, 00:20:30.578 "data_offset": 0, 00:20:30.578 "data_size": 0 00:20:30.578 }, 00:20:30.578 { 00:20:30.578 "name": "BaseBdev2", 00:20:30.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.578 "is_configured": false, 00:20:30.578 "data_offset": 0, 00:20:30.578 "data_size": 0 00:20:30.578 }, 00:20:30.578 { 00:20:30.578 "name": "BaseBdev3", 00:20:30.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.578 "is_configured": false, 00:20:30.578 "data_offset": 0, 00:20:30.578 "data_size": 0 00:20:30.578 }, 00:20:30.578 { 00:20:30.578 "name": "BaseBdev4", 00:20:30.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.578 "is_configured": false, 00:20:30.578 "data_offset": 0, 00:20:30.578 "data_size": 0 00:20:30.578 } 00:20:30.578 ] 00:20:30.578 }' 00:20:30.578 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.578 12:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.146 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:31.146 [2024-07-25 12:02:17.259559] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.146 [2024-07-25 12:02:17.259588] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e47f60 name Existed_Raid, state configuring 00:20:31.405 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:31.405 [2024-07-25 12:02:17.484178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.405 [2024-07-25 12:02:17.484208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.405 [2024-07-25 12:02:17.484217] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.405 [2024-07-25 12:02:17.484228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.405 [2024-07-25 12:02:17.484236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.405 [2024-07-25 12:02:17.484246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.405 [2024-07-25 12:02:17.484254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:31.405 [2024-07-25 12:02:17.484264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:31.405 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:31.663 [2024-07-25 12:02:17.706110] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.663 BaseBdev1 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:31.663 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.922 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:32.184 [ 00:20:32.184 { 00:20:32.184 "name": "BaseBdev1", 00:20:32.184 "aliases": [ 00:20:32.184 "647f8da7-eb05-41ec-a5a7-bec7547abb59" 00:20:32.184 ], 00:20:32.184 "product_name": "Malloc disk", 00:20:32.185 "block_size": 512, 00:20:32.185 "num_blocks": 65536, 00:20:32.185 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:32.185 "assigned_rate_limits": { 00:20:32.185 "rw_ios_per_sec": 0, 00:20:32.185 "rw_mbytes_per_sec": 0, 00:20:32.185 "r_mbytes_per_sec": 0, 00:20:32.185 "w_mbytes_per_sec": 0 00:20:32.185 }, 00:20:32.185 "claimed": true, 00:20:32.185 "claim_type": "exclusive_write", 00:20:32.185 "zoned": false, 00:20:32.185 "supported_io_types": { 00:20:32.185 "read": true, 00:20:32.185 "write": true, 00:20:32.185 "unmap": true, 00:20:32.185 "flush": true, 00:20:32.185 "reset": true, 00:20:32.185 "nvme_admin": false, 00:20:32.185 "nvme_io": false, 00:20:32.185 "nvme_io_md": false, 00:20:32.185 "write_zeroes": true, 00:20:32.185 
"zcopy": true, 00:20:32.185 "get_zone_info": false, 00:20:32.185 "zone_management": false, 00:20:32.185 "zone_append": false, 00:20:32.185 "compare": false, 00:20:32.185 "compare_and_write": false, 00:20:32.185 "abort": true, 00:20:32.185 "seek_hole": false, 00:20:32.185 "seek_data": false, 00:20:32.185 "copy": true, 00:20:32.185 "nvme_iov_md": false 00:20:32.185 }, 00:20:32.185 "memory_domains": [ 00:20:32.185 { 00:20:32.185 "dma_device_id": "system", 00:20:32.185 "dma_device_type": 1 00:20:32.185 }, 00:20:32.185 { 00:20:32.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.185 "dma_device_type": 2 00:20:32.185 } 00:20:32.185 ], 00:20:32.185 "driver_specific": {} 00:20:32.185 } 00:20:32.185 ] 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.185 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.444 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.444 "name": "Existed_Raid", 00:20:32.444 "uuid": "9859ec13-ed92-4113-a271-51bb597339a8", 00:20:32.444 "strip_size_kb": 0, 00:20:32.444 "state": "configuring", 00:20:32.444 "raid_level": "raid1", 00:20:32.444 "superblock": true, 00:20:32.444 "num_base_bdevs": 4, 00:20:32.444 "num_base_bdevs_discovered": 1, 00:20:32.444 "num_base_bdevs_operational": 4, 00:20:32.445 "base_bdevs_list": [ 00:20:32.445 { 00:20:32.445 "name": "BaseBdev1", 00:20:32.445 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:32.445 "is_configured": true, 00:20:32.445 "data_offset": 2048, 00:20:32.445 "data_size": 63488 00:20:32.445 }, 00:20:32.445 { 00:20:32.445 "name": "BaseBdev2", 00:20:32.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.445 "is_configured": false, 00:20:32.445 "data_offset": 0, 00:20:32.445 "data_size": 0 00:20:32.445 }, 00:20:32.445 { 00:20:32.445 "name": "BaseBdev3", 00:20:32.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.445 "is_configured": false, 00:20:32.445 "data_offset": 0, 00:20:32.445 "data_size": 0 00:20:32.445 }, 00:20:32.445 { 00:20:32.445 "name": 
"BaseBdev4", 00:20:32.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.445 "is_configured": false, 00:20:32.445 "data_offset": 0, 00:20:32.445 "data_size": 0 00:20:32.445 } 00:20:32.445 ] 00:20:32.445 }' 00:20:32.445 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.445 12:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.013 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:33.272 [2024-07-25 12:02:19.169940] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:33.272 [2024-07-25 12:02:19.169982] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e477d0 name Existed_Raid, state configuring 00:20:33.272 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:33.540 [2024-07-25 12:02:19.398588] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.540 [2024-07-25 12:02:19.400027] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.540 [2024-07-25 12:02:19.400061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.540 [2024-07-25 12:02:19.400071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.540 [2024-07-25 12:02:19.400081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.540 [2024-07-25 12:02:19.400090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:33.540 [2024-07-25 12:02:19.400100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.540 "name": "Existed_Raid", 00:20:33.540 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:33.540 "strip_size_kb": 0, 00:20:33.540 "state": "configuring", 00:20:33.540 "raid_level": "raid1", 00:20:33.540 "superblock": true, 00:20:33.540 "num_base_bdevs": 4, 00:20:33.540 "num_base_bdevs_discovered": 1, 00:20:33.540 "num_base_bdevs_operational": 4, 00:20:33.540 "base_bdevs_list": [ 00:20:33.540 { 00:20:33.540 "name": "BaseBdev1", 00:20:33.540 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:33.540 "is_configured": true, 00:20:33.540 "data_offset": 2048, 00:20:33.540 "data_size": 63488 00:20:33.540 }, 00:20:33.540 { 00:20:33.540 "name": "BaseBdev2", 00:20:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.540 "is_configured": false, 00:20:33.540 "data_offset": 0, 00:20:33.540 "data_size": 0 00:20:33.540 }, 00:20:33.540 { 00:20:33.540 "name": "BaseBdev3", 00:20:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.540 "is_configured": false, 00:20:33.540 "data_offset": 0, 00:20:33.540 "data_size": 0 00:20:33.540 }, 00:20:33.540 { 00:20:33.540 "name": "BaseBdev4", 00:20:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.540 "is_configured": false, 00:20:33.540 "data_offset": 0, 00:20:33.540 "data_size": 0 00:20:33.540 } 00:20:33.540 ] 00:20:33.540 }' 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.540 12:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.144 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:34.403 [2024-07-25 12:02:20.400380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.403 BaseBdev2 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:34.403 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.662 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:34.921 [ 00:20:34.921 { 00:20:34.921 "name": "BaseBdev2", 00:20:34.921 "aliases": [ 00:20:34.921 "ed969358-90f4-4726-8a7d-f715aabf2940" 
00:20:34.921 ], 00:20:34.921 "product_name": "Malloc disk", 00:20:34.921 "block_size": 512, 00:20:34.921 "num_blocks": 65536, 00:20:34.921 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:34.921 "assigned_rate_limits": { 00:20:34.921 "rw_ios_per_sec": 0, 00:20:34.921 "rw_mbytes_per_sec": 0, 00:20:34.921 "r_mbytes_per_sec": 0, 00:20:34.921 "w_mbytes_per_sec": 0 00:20:34.921 }, 00:20:34.921 "claimed": true, 00:20:34.921 "claim_type": "exclusive_write", 00:20:34.921 "zoned": false, 00:20:34.921 "supported_io_types": { 00:20:34.921 "read": true, 00:20:34.921 "write": true, 00:20:34.921 "unmap": true, 00:20:34.921 "flush": true, 00:20:34.921 "reset": true, 00:20:34.921 "nvme_admin": false, 00:20:34.921 "nvme_io": false, 00:20:34.921 "nvme_io_md": false, 00:20:34.921 "write_zeroes": true, 00:20:34.921 "zcopy": true, 00:20:34.921 "get_zone_info": false, 00:20:34.921 "zone_management": false, 00:20:34.921 "zone_append": false, 00:20:34.921 "compare": false, 00:20:34.921 "compare_and_write": false, 00:20:34.921 "abort": true, 00:20:34.921 "seek_hole": false, 00:20:34.921 "seek_data": false, 00:20:34.921 "copy": true, 00:20:34.921 "nvme_iov_md": false 00:20:34.921 }, 00:20:34.921 "memory_domains": [ 00:20:34.921 { 00:20:34.921 "dma_device_id": "system", 00:20:34.921 "dma_device_type": 1 00:20:34.921 }, 00:20:34.921 { 00:20:34.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.921 "dma_device_type": 2 00:20:34.921 } 00:20:34.921 ], 00:20:34.921 "driver_specific": {} 00:20:34.921 } 00:20:34.921 ] 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:34.921 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.922 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.181 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.181 "name": "Existed_Raid", 00:20:35.181 "uuid": 
"aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:35.181 "strip_size_kb": 0, 00:20:35.181 "state": "configuring", 00:20:35.181 "raid_level": "raid1", 00:20:35.181 "superblock": true, 00:20:35.181 "num_base_bdevs": 4, 00:20:35.181 "num_base_bdevs_discovered": 2, 00:20:35.181 "num_base_bdevs_operational": 4, 00:20:35.181 "base_bdevs_list": [ 00:20:35.181 { 00:20:35.181 "name": "BaseBdev1", 00:20:35.181 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:35.181 "is_configured": true, 00:20:35.181 "data_offset": 2048, 00:20:35.181 "data_size": 63488 00:20:35.181 }, 00:20:35.181 { 00:20:35.181 "name": "BaseBdev2", 00:20:35.181 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:35.181 "is_configured": true, 00:20:35.181 "data_offset": 2048, 00:20:35.181 "data_size": 63488 00:20:35.181 }, 00:20:35.181 { 00:20:35.181 "name": "BaseBdev3", 00:20:35.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.181 "is_configured": false, 00:20:35.181 "data_offset": 0, 00:20:35.181 "data_size": 0 00:20:35.181 }, 00:20:35.181 { 00:20:35.181 "name": "BaseBdev4", 00:20:35.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.181 "is_configured": false, 00:20:35.181 "data_offset": 0, 00:20:35.181 "data_size": 0 00:20:35.181 } 00:20:35.181 ] 00:20:35.181 }' 00:20:35.181 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.181 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:35.749 [2024-07-25 12:02:21.775241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:35.749 BaseBdev3 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:35.749 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.008 12:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:36.267 [ 00:20:36.267 { 00:20:36.267 "name": "BaseBdev3", 00:20:36.267 "aliases": [ 00:20:36.267 "78c109ac-504d-4791-901b-18f8cf989ebb" 00:20:36.267 ], 00:20:36.267 "product_name": "Malloc disk", 00:20:36.267 "block_size": 512, 00:20:36.267 "num_blocks": 65536, 00:20:36.267 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:36.267 "assigned_rate_limits": { 00:20:36.267 "rw_ios_per_sec": 0, 00:20:36.267 "rw_mbytes_per_sec": 0, 00:20:36.267 "r_mbytes_per_sec": 0, 00:20:36.267 "w_mbytes_per_sec": 0 00:20:36.267 }, 00:20:36.267 "claimed": true, 00:20:36.267 "claim_type": 
"exclusive_write", 00:20:36.267 "zoned": false, 00:20:36.267 "supported_io_types": { 00:20:36.267 "read": true, 00:20:36.267 "write": true, 00:20:36.267 "unmap": true, 00:20:36.267 "flush": true, 00:20:36.267 "reset": true, 00:20:36.267 "nvme_admin": false, 00:20:36.267 "nvme_io": false, 00:20:36.267 "nvme_io_md": false, 00:20:36.267 "write_zeroes": true, 00:20:36.267 "zcopy": true, 00:20:36.267 "get_zone_info": false, 00:20:36.267 "zone_management": false, 00:20:36.267 "zone_append": false, 00:20:36.267 "compare": false, 00:20:36.267 "compare_and_write": false, 00:20:36.267 "abort": true, 00:20:36.267 "seek_hole": false, 00:20:36.267 "seek_data": false, 00:20:36.267 "copy": true, 00:20:36.267 "nvme_iov_md": false 00:20:36.267 }, 00:20:36.267 "memory_domains": [ 00:20:36.267 { 00:20:36.267 "dma_device_id": "system", 00:20:36.267 "dma_device_type": 1 00:20:36.267 }, 00:20:36.267 { 00:20:36.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.267 "dma_device_type": 2 00:20:36.267 } 00:20:36.267 ], 00:20:36.267 "driver_specific": {} 00:20:36.267 } 00:20:36.267 ] 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.267 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.526 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.526 "name": "Existed_Raid", 00:20:36.526 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:36.526 "strip_size_kb": 0, 00:20:36.526 "state": "configuring", 00:20:36.526 "raid_level": "raid1", 00:20:36.526 "superblock": true, 00:20:36.526 "num_base_bdevs": 4, 00:20:36.526 "num_base_bdevs_discovered": 3, 00:20:36.526 "num_base_bdevs_operational": 4, 00:20:36.526 "base_bdevs_list": [ 00:20:36.526 { 00:20:36.526 "name": "BaseBdev1", 00:20:36.526 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:36.526 
"is_configured": true, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": "BaseBdev2", 00:20:36.526 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:36.526 "is_configured": true, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": "BaseBdev3", 00:20:36.526 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:36.526 "is_configured": true, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": "BaseBdev4", 00:20:36.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.526 "is_configured": false, 00:20:36.526 "data_offset": 0, 00:20:36.526 "data_size": 0 00:20:36.526 } 00:20:36.526 ] 00:20:36.526 }' 00:20:36.526 12:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.526 12:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.093 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:37.352 [2024-07-25 12:02:23.254446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:37.352 [2024-07-25 12:02:23.254607] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e48830 00:20:37.352 [2024-07-25 12:02:23.254620] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:37.352 [2024-07-25 12:02:23.254794] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e3f360 00:20:37.352 [2024-07-25 12:02:23.254916] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e48830 00:20:37.352 [2024-07-25 12:02:23.254925] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e48830 00:20:37.352 [2024-07-25 12:02:23.255014] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.352 BaseBdev4 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:37.352 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.611 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:37.611 [ 00:20:37.611 { 00:20:37.611 "name": "BaseBdev4", 00:20:37.611 "aliases": [ 00:20:37.611 "c5be3908-39d1-4cca-b79a-ca88e1634a1a" 00:20:37.611 ], 00:20:37.611 "product_name": "Malloc disk", 00:20:37.611 "block_size": 512, 00:20:37.611 "num_blocks": 65536, 00:20:37.611 
"uuid": "c5be3908-39d1-4cca-b79a-ca88e1634a1a", 00:20:37.611 "assigned_rate_limits": { 00:20:37.611 "rw_ios_per_sec": 0, 00:20:37.611 "rw_mbytes_per_sec": 0, 00:20:37.611 "r_mbytes_per_sec": 0, 00:20:37.611 "w_mbytes_per_sec": 0 00:20:37.611 }, 00:20:37.611 "claimed": true, 00:20:37.611 "claim_type": "exclusive_write", 00:20:37.611 "zoned": false, 00:20:37.611 "supported_io_types": { 00:20:37.611 "read": true, 00:20:37.611 "write": true, 00:20:37.611 "unmap": true, 00:20:37.611 "flush": true, 00:20:37.611 "reset": true, 00:20:37.611 "nvme_admin": false, 00:20:37.611 "nvme_io": false, 00:20:37.612 "nvme_io_md": false, 00:20:37.612 "write_zeroes": true, 00:20:37.612 "zcopy": true, 00:20:37.612 "get_zone_info": false, 00:20:37.612 "zone_management": false, 00:20:37.612 "zone_append": false, 00:20:37.612 "compare": false, 00:20:37.612 "compare_and_write": false, 00:20:37.612 "abort": true, 00:20:37.612 "seek_hole": false, 00:20:37.612 "seek_data": false, 00:20:37.612 "copy": true, 00:20:37.612 "nvme_iov_md": false 00:20:37.612 }, 00:20:37.612 "memory_domains": [ 00:20:37.612 { 00:20:37.612 "dma_device_id": "system", 00:20:37.612 "dma_device_type": 1 00:20:37.612 }, 00:20:37.612 { 00:20:37.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.612 "dma_device_type": 2 00:20:37.612 } 00:20:37.612 ], 00:20:37.612 "driver_specific": {} 00:20:37.612 } 00:20:37.612 ] 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.870 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.871 "name": "Existed_Raid", 00:20:37.871 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:37.871 "strip_size_kb": 0, 00:20:37.871 "state": "online", 00:20:37.871 "raid_level": "raid1", 00:20:37.871 "superblock": 
true, 00:20:37.871 "num_base_bdevs": 4, 00:20:37.871 "num_base_bdevs_discovered": 4, 00:20:37.871 "num_base_bdevs_operational": 4, 00:20:37.871 "base_bdevs_list": [ 00:20:37.871 { 00:20:37.871 "name": "BaseBdev1", 00:20:37.871 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:37.871 "is_configured": true, 00:20:37.871 "data_offset": 2048, 00:20:37.871 "data_size": 63488 00:20:37.871 }, 00:20:37.871 { 00:20:37.871 "name": "BaseBdev2", 00:20:37.871 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:37.871 "is_configured": true, 00:20:37.871 "data_offset": 2048, 00:20:37.871 "data_size": 63488 00:20:37.871 }, 00:20:37.871 { 00:20:37.871 "name": "BaseBdev3", 00:20:37.871 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:37.871 "is_configured": true, 00:20:37.871 "data_offset": 2048, 00:20:37.871 "data_size": 63488 00:20:37.871 }, 00:20:37.871 { 00:20:37.871 "name": "BaseBdev4", 00:20:37.871 "uuid": "c5be3908-39d1-4cca-b79a-ca88e1634a1a", 00:20:37.871 "is_configured": true, 00:20:37.871 "data_offset": 2048, 00:20:37.871 "data_size": 63488 00:20:37.871 } 00:20:37.871 ] 00:20:37.871 }' 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.871 12:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:38.438 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:38.698 [2024-07-25 12:02:24.750702] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.698 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:38.698 "name": "Existed_Raid", 00:20:38.698 "aliases": [ 00:20:38.698 "aca7ffa9-2650-4dc4-a072-13e328df3daf" 00:20:38.698 ], 00:20:38.698 "product_name": "Raid Volume", 00:20:38.698 "block_size": 512, 00:20:38.698 "num_blocks": 63488, 00:20:38.698 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:38.698 "assigned_rate_limits": { 00:20:38.698 "rw_ios_per_sec": 0, 00:20:38.698 "rw_mbytes_per_sec": 0, 00:20:38.698 "r_mbytes_per_sec": 0, 00:20:38.698 "w_mbytes_per_sec": 0 00:20:38.698 }, 00:20:38.698 "claimed": false, 00:20:38.698 "zoned": false, 00:20:38.698 "supported_io_types": { 00:20:38.698 "read": true, 00:20:38.698 "write": true, 00:20:38.698 "unmap": false, 00:20:38.698 "flush": false, 00:20:38.698 "reset": true, 00:20:38.698 "nvme_admin": false, 00:20:38.698 "nvme_io": false, 00:20:38.698 "nvme_io_md": false, 00:20:38.698 "write_zeroes": true, 00:20:38.698 "zcopy": false, 00:20:38.698 "get_zone_info": false, 00:20:38.698 "zone_management": false, 00:20:38.698 "zone_append": 
false, 00:20:38.698 "compare": false, 00:20:38.698 "compare_and_write": false, 00:20:38.698 "abort": false, 00:20:38.698 "seek_hole": false, 00:20:38.698 "seek_data": false, 00:20:38.698 "copy": false, 00:20:38.698 "nvme_iov_md": false 00:20:38.698 }, 00:20:38.698 "memory_domains": [ 00:20:38.698 { 00:20:38.698 "dma_device_id": "system", 00:20:38.698 "dma_device_type": 1 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.698 "dma_device_type": 2 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "system", 00:20:38.698 "dma_device_type": 1 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.698 "dma_device_type": 2 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "system", 00:20:38.698 "dma_device_type": 1 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.698 "dma_device_type": 2 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "system", 00:20:38.698 "dma_device_type": 1 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.698 "dma_device_type": 2 00:20:38.698 } 00:20:38.698 ], 00:20:38.698 "driver_specific": { 00:20:38.698 "raid": { 00:20:38.698 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:38.698 "strip_size_kb": 0, 00:20:38.698 "state": "online", 00:20:38.698 "raid_level": "raid1", 00:20:38.698 "superblock": true, 00:20:38.698 "num_base_bdevs": 4, 00:20:38.698 "num_base_bdevs_discovered": 4, 00:20:38.698 "num_base_bdevs_operational": 4, 00:20:38.698 "base_bdevs_list": [ 00:20:38.698 { 00:20:38.698 "name": "BaseBdev1", 00:20:38.698 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:38.698 "is_configured": true, 00:20:38.698 "data_offset": 2048, 00:20:38.698 "data_size": 63488 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "name": "BaseBdev2", 00:20:38.698 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:38.698 "is_configured": true, 00:20:38.698 "data_offset": 2048, 00:20:38.698 "data_size": 63488 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "name": "BaseBdev3", 00:20:38.698 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:38.698 "is_configured": true, 00:20:38.698 "data_offset": 2048, 00:20:38.698 "data_size": 63488 00:20:38.698 }, 00:20:38.698 { 00:20:38.698 "name": "BaseBdev4", 00:20:38.698 "uuid": "c5be3908-39d1-4cca-b79a-ca88e1634a1a", 00:20:38.698 "is_configured": true, 00:20:38.698 "data_offset": 2048, 00:20:38.698 "data_size": 63488 00:20:38.698 } 00:20:38.698 ] 00:20:38.698 } 00:20:38.698 } 00:20:38.698 }' 00:20:38.698 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.958 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:38.958 BaseBdev2 00:20:38.958 BaseBdev3 00:20:38.958 BaseBdev4' 00:20:38.958 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.958 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:38.958 12:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:38.958 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:38.958 "name": "BaseBdev1", 00:20:38.958 "aliases": [ 00:20:38.958 
"647f8da7-eb05-41ec-a5a7-bec7547abb59" 00:20:38.958 ], 00:20:38.958 "product_name": "Malloc disk", 00:20:38.958 "block_size": 512, 00:20:38.958 "num_blocks": 65536, 00:20:38.958 "uuid": "647f8da7-eb05-41ec-a5a7-bec7547abb59", 00:20:38.958 "assigned_rate_limits": { 00:20:38.958 "rw_ios_per_sec": 0, 00:20:38.958 "rw_mbytes_per_sec": 0, 00:20:38.958 "r_mbytes_per_sec": 0, 00:20:38.958 "w_mbytes_per_sec": 0 00:20:38.958 }, 00:20:38.958 "claimed": true, 00:20:38.958 "claim_type": "exclusive_write", 00:20:38.958 "zoned": false, 00:20:38.958 "supported_io_types": { 00:20:38.958 "read": true, 00:20:38.958 "write": true, 00:20:38.958 "unmap": true, 00:20:38.958 "flush": true, 00:20:38.958 "reset": true, 00:20:38.958 "nvme_admin": false, 00:20:38.958 "nvme_io": false, 00:20:38.958 "nvme_io_md": false, 00:20:38.958 "write_zeroes": true, 00:20:38.958 "zcopy": true, 00:20:38.958 "get_zone_info": false, 00:20:38.958 "zone_management": false, 00:20:38.958 "zone_append": false, 00:20:38.958 "compare": false, 00:20:38.958 "compare_and_write": false, 00:20:38.958 "abort": true, 00:20:38.958 "seek_hole": false, 00:20:38.958 "seek_data": false, 00:20:38.958 "copy": true, 00:20:38.958 "nvme_iov_md": false 00:20:38.958 }, 00:20:38.958 "memory_domains": [ 00:20:38.958 { 00:20:38.958 "dma_device_id": "system", 00:20:38.958 "dma_device_type": 1 00:20:38.958 }, 00:20:38.958 { 00:20:38.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.958 "dma_device_type": 2 00:20:38.958 } 00:20:38.958 ], 00:20:38.958 "driver_specific": {} 00:20:38.958 }' 00:20:38.958 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.217 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.476 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.476 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.476 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.476 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:39.476 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.735 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.735 "name": "BaseBdev2", 00:20:39.735 "aliases": [ 00:20:39.735 "ed969358-90f4-4726-8a7d-f715aabf2940" 00:20:39.735 ], 00:20:39.735 "product_name": "Malloc disk", 00:20:39.735 "block_size": 512, 
00:20:39.735 "num_blocks": 65536, 00:20:39.735 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:39.735 "assigned_rate_limits": { 00:20:39.735 "rw_ios_per_sec": 0, 00:20:39.735 "rw_mbytes_per_sec": 0, 00:20:39.735 "r_mbytes_per_sec": 0, 00:20:39.735 "w_mbytes_per_sec": 0 00:20:39.735 }, 00:20:39.735 "claimed": true, 00:20:39.735 "claim_type": "exclusive_write", 00:20:39.735 "zoned": false, 00:20:39.735 "supported_io_types": { 00:20:39.735 "read": true, 00:20:39.735 "write": true, 00:20:39.735 "unmap": true, 00:20:39.735 "flush": true, 00:20:39.735 "reset": true, 00:20:39.735 "nvme_admin": false, 00:20:39.735 "nvme_io": false, 00:20:39.736 "nvme_io_md": false, 00:20:39.736 "write_zeroes": true, 00:20:39.736 "zcopy": true, 00:20:39.736 "get_zone_info": false, 00:20:39.736 "zone_management": false, 00:20:39.736 "zone_append": false, 00:20:39.736 "compare": false, 00:20:39.736 "compare_and_write": false, 00:20:39.736 "abort": true, 00:20:39.736 "seek_hole": false, 00:20:39.736 "seek_data": false, 00:20:39.736 "copy": true, 00:20:39.736 "nvme_iov_md": false 00:20:39.736 }, 00:20:39.736 "memory_domains": [ 00:20:39.736 { 00:20:39.736 "dma_device_id": "system", 00:20:39.736 "dma_device_type": 1 00:20:39.736 }, 00:20:39.736 { 00:20:39.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.736 "dma_device_type": 2 00:20:39.736 } 00:20:39.736 ], 00:20:39.736 "driver_specific": {} 00:20:39.736 }' 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.736 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:39.995 12:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.254 "name": "BaseBdev3", 00:20:40.254 "aliases": [ 00:20:40.254 "78c109ac-504d-4791-901b-18f8cf989ebb" 00:20:40.254 ], 00:20:40.254 "product_name": "Malloc disk", 00:20:40.254 "block_size": 512, 00:20:40.254 "num_blocks": 65536, 00:20:40.254 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:40.254 "assigned_rate_limits": { 
00:20:40.254 "rw_ios_per_sec": 0, 00:20:40.254 "rw_mbytes_per_sec": 0, 00:20:40.254 "r_mbytes_per_sec": 0, 00:20:40.254 "w_mbytes_per_sec": 0 00:20:40.254 }, 00:20:40.254 "claimed": true, 00:20:40.254 "claim_type": "exclusive_write", 00:20:40.254 "zoned": false, 00:20:40.254 "supported_io_types": { 00:20:40.254 "read": true, 00:20:40.254 "write": true, 00:20:40.254 "unmap": true, 00:20:40.254 "flush": true, 00:20:40.254 "reset": true, 00:20:40.254 "nvme_admin": false, 00:20:40.254 "nvme_io": false, 00:20:40.254 "nvme_io_md": false, 00:20:40.254 "write_zeroes": true, 00:20:40.254 "zcopy": true, 00:20:40.254 "get_zone_info": false, 00:20:40.254 "zone_management": false, 00:20:40.254 "zone_append": false, 00:20:40.254 "compare": false, 00:20:40.254 "compare_and_write": false, 00:20:40.254 "abort": true, 00:20:40.254 "seek_hole": false, 00:20:40.254 "seek_data": false, 00:20:40.254 "copy": true, 00:20:40.254 "nvme_iov_md": false 00:20:40.254 }, 00:20:40.254 "memory_domains": [ 00:20:40.254 { 00:20:40.254 "dma_device_id": "system", 00:20:40.254 "dma_device_type": 1 00:20:40.254 }, 00:20:40.254 { 00:20:40.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.254 "dma_device_type": 2 00:20:40.254 } 00:20:40.254 ], 00:20:40.254 "driver_specific": {} 00:20:40.254 }' 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.254 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:40.513 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.772 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.772 "name": "BaseBdev4", 00:20:40.772 "aliases": [ 00:20:40.772 "c5be3908-39d1-4cca-b79a-ca88e1634a1a" 00:20:40.772 ], 00:20:40.772 "product_name": "Malloc disk", 00:20:40.772 "block_size": 512, 00:20:40.772 "num_blocks": 65536, 00:20:40.772 "uuid": "c5be3908-39d1-4cca-b79a-ca88e1634a1a", 00:20:40.772 "assigned_rate_limits": { 00:20:40.772 "rw_ios_per_sec": 0, 00:20:40.772 "rw_mbytes_per_sec": 0, 00:20:40.772 "r_mbytes_per_sec": 0, 00:20:40.772 
"w_mbytes_per_sec": 0 00:20:40.772 }, 00:20:40.772 "claimed": true, 00:20:40.772 "claim_type": "exclusive_write", 00:20:40.772 "zoned": false, 00:20:40.772 "supported_io_types": { 00:20:40.772 "read": true, 00:20:40.772 "write": true, 00:20:40.772 "unmap": true, 00:20:40.772 "flush": true, 00:20:40.772 "reset": true, 00:20:40.772 "nvme_admin": false, 00:20:40.772 "nvme_io": false, 00:20:40.772 "nvme_io_md": false, 00:20:40.772 "write_zeroes": true, 00:20:40.772 "zcopy": true, 00:20:40.772 "get_zone_info": false, 00:20:40.772 "zone_management": false, 00:20:40.772 "zone_append": false, 00:20:40.772 "compare": false, 00:20:40.772 "compare_and_write": false, 00:20:40.772 "abort": true, 00:20:40.772 "seek_hole": false, 00:20:40.772 "seek_data": false, 00:20:40.772 "copy": true, 00:20:40.772 "nvme_iov_md": false 00:20:40.772 }, 00:20:40.772 "memory_domains": [ 00:20:40.772 { 00:20:40.772 "dma_device_id": "system", 00:20:40.772 "dma_device_type": 1 00:20:40.772 }, 00:20:40.772 { 00:20:40.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.772 "dma_device_type": 2 00:20:40.772 } 00:20:40.772 ], 00:20:40.772 "driver_specific": {} 00:20:40.772 }' 00:20:40.772 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.772 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.772 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.772 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.031 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.031 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.031 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.031 12:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.031 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.031 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.031 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.031 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.031 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:41.289 [2024-07-25 12:02:27.289228] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.290 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.549 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.549 "name": "Existed_Raid", 00:20:41.549 "uuid": "aca7ffa9-2650-4dc4-a072-13e328df3daf", 00:20:41.549 "strip_size_kb": 0, 00:20:41.549 "state": "online", 00:20:41.549 "raid_level": "raid1", 00:20:41.549 "superblock": true, 00:20:41.549 "num_base_bdevs": 4, 00:20:41.549 "num_base_bdevs_discovered": 3, 00:20:41.549 "num_base_bdevs_operational": 3, 00:20:41.549 "base_bdevs_list": [ 00:20:41.549 { 00:20:41.549 "name": null, 00:20:41.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.549 "is_configured": false, 00:20:41.549 "data_offset": 2048, 00:20:41.549 "data_size": 63488 00:20:41.549 }, 00:20:41.549 { 00:20:41.549 "name": "BaseBdev2", 00:20:41.549 "uuid": "ed969358-90f4-4726-8a7d-f715aabf2940", 00:20:41.549 "is_configured": true, 00:20:41.549 "data_offset": 2048, 00:20:41.549 "data_size": 63488 00:20:41.549 }, 00:20:41.549 { 00:20:41.549 "name": "BaseBdev3", 00:20:41.549 "uuid": "78c109ac-504d-4791-901b-18f8cf989ebb", 00:20:41.549 "is_configured": true, 00:20:41.549 "data_offset": 2048, 00:20:41.549 "data_size": 63488 00:20:41.549 }, 00:20:41.549 { 00:20:41.549 "name": "BaseBdev4", 00:20:41.549 "uuid": "c5be3908-39d1-4cca-b79a-ca88e1634a1a", 00:20:41.549 "is_configured": true, 00:20:41.549 "data_offset": 2048, 00:20:41.549 "data_size": 63488 00:20:41.549 } 00:20:41.549 ] 00:20:41.549 }' 00:20:41.549 12:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.549 12:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.153 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:42.153 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.153 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.153 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.412 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.412 
12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.412 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:42.671 [2024-07-25 12:02:28.533594] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.671 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.671 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.671 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.671 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.929 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.930 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.930 12:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:42.930 [2024-07-25 12:02:28.996752] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:42.930 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.930 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.930 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.930 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:43.188 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:43.188 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:43.188 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:43.448 [2024-07-25 12:02:29.460188] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:43.448 [2024-07-25 12:02:29.460268] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.448 [2024-07-25 12:02:29.470631] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.448 [2024-07-25 12:02:29.470660] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.448 [2024-07-25 12:02:29.470670] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e48830 name Existed_Raid, state offline 00:20:43.448 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:43.448 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:43.448 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.448 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.707 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.966 BaseBdev2 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:43.966 12:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.225 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.484 [ 00:20:44.484 { 00:20:44.484 "name": "BaseBdev2", 00:20:44.484 "aliases": [ 00:20:44.484 "dc520e4c-b82f-4008-ad40-cb34f81eb0ad" 00:20:44.484 ], 00:20:44.484 "product_name": "Malloc disk", 00:20:44.484 "block_size": 512, 00:20:44.484 "num_blocks": 65536, 00:20:44.484 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:44.484 "assigned_rate_limits": { 00:20:44.484 "rw_ios_per_sec": 0, 00:20:44.484 "rw_mbytes_per_sec": 0, 00:20:44.484 "r_mbytes_per_sec": 0, 00:20:44.484 "w_mbytes_per_sec": 0 00:20:44.484 }, 00:20:44.484 "claimed": false, 00:20:44.484 "zoned": false, 00:20:44.484 "supported_io_types": { 00:20:44.484 "read": true, 00:20:44.484 "write": true, 00:20:44.484 "unmap": true, 00:20:44.484 "flush": true, 00:20:44.484 "reset": true, 00:20:44.484 "nvme_admin": false, 00:20:44.484 "nvme_io": false, 00:20:44.484 "nvme_io_md": false, 00:20:44.484 "write_zeroes": true, 00:20:44.484 "zcopy": true, 00:20:44.484 "get_zone_info": false, 00:20:44.484 "zone_management": false, 00:20:44.484 "zone_append": false, 00:20:44.484 "compare": false, 00:20:44.484 "compare_and_write": false, 00:20:44.484 "abort": true, 00:20:44.484 "seek_hole": false, 00:20:44.484 "seek_data": false, 00:20:44.484 "copy": true, 00:20:44.484 "nvme_iov_md": false 00:20:44.484 }, 00:20:44.484 "memory_domains": [ 00:20:44.484 { 00:20:44.484 "dma_device_id": "system", 00:20:44.484 "dma_device_type": 1 00:20:44.484 }, 00:20:44.484 { 00:20:44.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:44.484 "dma_device_type": 2 00:20:44.484 } 00:20:44.484 ], 00:20:44.484 "driver_specific": {} 00:20:44.484 } 00:20:44.484 ] 00:20:44.484 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:44.484 12:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.484 12:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.484 12:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:44.743 BaseBdev3 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:44.743 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.002 12:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:45.002 [ 00:20:45.002 { 00:20:45.002 "name": "BaseBdev3", 00:20:45.002 "aliases": [ 00:20:45.002 "d148b551-7251-4207-b696-db1d0e472bfe" 00:20:45.002 ], 00:20:45.002 "product_name": "Malloc disk", 00:20:45.002 "block_size": 512, 00:20:45.002 "num_blocks": 65536, 00:20:45.002 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:45.002 "assigned_rate_limits": { 00:20:45.002 "rw_ios_per_sec": 0, 00:20:45.002 "rw_mbytes_per_sec": 0, 00:20:45.002 "r_mbytes_per_sec": 0, 00:20:45.002 "w_mbytes_per_sec": 0 00:20:45.002 }, 00:20:45.002 "claimed": false, 00:20:45.003 "zoned": false, 00:20:45.003 "supported_io_types": { 00:20:45.003 "read": true, 00:20:45.003 "write": true, 00:20:45.003 "unmap": true, 00:20:45.003 "flush": true, 00:20:45.003 "reset": true, 00:20:45.003 "nvme_admin": false, 00:20:45.003 "nvme_io": false, 00:20:45.003 "nvme_io_md": false, 00:20:45.003 "write_zeroes": true, 00:20:45.003 "zcopy": true, 00:20:45.003 "get_zone_info": false, 00:20:45.003 "zone_management": false, 00:20:45.003 "zone_append": false, 00:20:45.003 "compare": false, 00:20:45.003 "compare_and_write": false, 00:20:45.003 "abort": true, 00:20:45.003 "seek_hole": false, 00:20:45.003 "seek_data": false, 00:20:45.003 "copy": true, 00:20:45.003 "nvme_iov_md": false 00:20:45.003 }, 00:20:45.003 "memory_domains": [ 00:20:45.003 { 00:20:45.003 "dma_device_id": "system", 00:20:45.003 "dma_device_type": 1 00:20:45.003 }, 00:20:45.003 { 00:20:45.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.003 "dma_device_type": 2 00:20:45.003 } 00:20:45.003 ], 00:20:45.003 "driver_specific": {} 00:20:45.003 } 00:20:45.003 ] 00:20:45.003 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:45.003 
12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.003 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.003 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:45.261 BaseBdev4 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.261 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.520 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:45.779 [ 00:20:45.779 { 00:20:45.779 "name": "BaseBdev4", 00:20:45.779 "aliases": [ 00:20:45.779 "83049a72-8c9e-423b-ae85-350fb30f34c0" 00:20:45.779 ], 00:20:45.779 "product_name": "Malloc disk", 00:20:45.779 "block_size": 512, 00:20:45.779 "num_blocks": 65536, 00:20:45.779 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:45.779 "assigned_rate_limits": { 00:20:45.779 "rw_ios_per_sec": 0, 00:20:45.779 "rw_mbytes_per_sec": 0, 00:20:45.779 "r_mbytes_per_sec": 0, 00:20:45.779 "w_mbytes_per_sec": 0 00:20:45.779 }, 00:20:45.779 "claimed": false, 00:20:45.779 "zoned": false, 00:20:45.779 "supported_io_types": { 00:20:45.779 "read": true, 00:20:45.779 "write": true, 00:20:45.779 "unmap": true, 00:20:45.779 "flush": true, 00:20:45.779 "reset": true, 00:20:45.779 "nvme_admin": false, 00:20:45.779 "nvme_io": false, 00:20:45.779 "nvme_io_md": false, 00:20:45.779 "write_zeroes": true, 00:20:45.779 "zcopy": true, 00:20:45.779 "get_zone_info": false, 00:20:45.779 "zone_management": false, 00:20:45.779 "zone_append": false, 00:20:45.779 "compare": false, 00:20:45.779 "compare_and_write": false, 00:20:45.779 "abort": true, 00:20:45.779 "seek_hole": false, 00:20:45.779 "seek_data": false, 00:20:45.779 "copy": true, 00:20:45.779 "nvme_iov_md": false 00:20:45.779 }, 00:20:45.779 "memory_domains": [ 00:20:45.779 { 00:20:45.779 "dma_device_id": "system", 00:20:45.779 "dma_device_type": 1 00:20:45.779 }, 00:20:45.779 { 00:20:45.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.779 "dma_device_type": 2 00:20:45.779 } 00:20:45.779 ], 00:20:45.779 "driver_specific": {} 00:20:45.779 } 00:20:45.779 ] 00:20:45.779 12:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:45.779 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.779 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.779 12:02:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:46.038 [2024-07-25 12:02:31.970013] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.038 [2024-07-25 12:02:31.970052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.038 [2024-07-25 12:02:31.970069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.038 [2024-07-25 12:02:31.971279] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.039 [2024-07-25 12:02:31.971319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.039 12:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.339 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.339 "name": "Existed_Raid", 00:20:46.339 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:46.339 "strip_size_kb": 0, 00:20:46.339 "state": "configuring", 00:20:46.339 "raid_level": "raid1", 00:20:46.339 "superblock": true, 00:20:46.339 "num_base_bdevs": 4, 00:20:46.339 "num_base_bdevs_discovered": 3, 00:20:46.339 "num_base_bdevs_operational": 4, 00:20:46.339 "base_bdevs_list": [ 00:20:46.339 { 00:20:46.339 "name": "BaseBdev1", 00:20:46.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.339 "is_configured": false, 00:20:46.339 "data_offset": 0, 00:20:46.339 "data_size": 0 00:20:46.339 }, 00:20:46.339 { 00:20:46.339 "name": "BaseBdev2", 00:20:46.339 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:46.339 "is_configured": true, 00:20:46.339 "data_offset": 2048, 00:20:46.339 "data_size": 63488 00:20:46.339 }, 00:20:46.339 { 00:20:46.339 "name": "BaseBdev3", 00:20:46.339 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:46.339 "is_configured": true, 00:20:46.339 "data_offset": 2048, 
00:20:46.339 "data_size": 63488 00:20:46.339 }, 00:20:46.339 { 00:20:46.339 "name": "BaseBdev4", 00:20:46.339 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:46.339 "is_configured": true, 00:20:46.339 "data_offset": 2048, 00:20:46.339 "data_size": 63488 00:20:46.339 } 00:20:46.339 ] 00:20:46.339 }' 00:20:46.339 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.339 12:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:46.919 [2024-07-25 12:02:32.972629] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.919 12:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.178 12:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.178 "name": "Existed_Raid", 00:20:47.178 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:47.178 "strip_size_kb": 0, 00:20:47.178 "state": "configuring", 00:20:47.178 "raid_level": "raid1", 00:20:47.178 "superblock": true, 00:20:47.178 "num_base_bdevs": 4, 00:20:47.178 "num_base_bdevs_discovered": 2, 00:20:47.178 "num_base_bdevs_operational": 4, 00:20:47.178 "base_bdevs_list": [ 00:20:47.178 { 00:20:47.178 "name": "BaseBdev1", 00:20:47.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.178 "is_configured": false, 00:20:47.178 "data_offset": 0, 00:20:47.178 "data_size": 0 00:20:47.178 }, 00:20:47.178 { 00:20:47.178 "name": null, 00:20:47.178 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:47.178 "is_configured": false, 00:20:47.178 "data_offset": 2048, 00:20:47.178 "data_size": 63488 00:20:47.178 }, 00:20:47.178 { 00:20:47.178 "name": "BaseBdev3", 00:20:47.178 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:47.178 "is_configured": true, 00:20:47.178 "data_offset": 2048, 00:20:47.178 "data_size": 63488 00:20:47.178 }, 00:20:47.178 
{ 00:20:47.178 "name": "BaseBdev4", 00:20:47.178 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:47.178 "is_configured": true, 00:20:47.178 "data_offset": 2048, 00:20:47.178 "data_size": 63488 00:20:47.178 } 00:20:47.178 ] 00:20:47.178 }' 00:20:47.178 12:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.178 12:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.743 12:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.743 12:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:48.001 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:48.002 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:48.260 [2024-07-25 12:02:34.247192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.260 BaseBdev1 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:48.260 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.518 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.777 [ 00:20:48.777 { 00:20:48.777 "name": "BaseBdev1", 00:20:48.777 "aliases": [ 00:20:48.777 "a75a99d0-08ff-48df-8b74-d73d32c8b8b4" 00:20:48.777 ], 00:20:48.777 "product_name": "Malloc disk", 00:20:48.777 "block_size": 512, 00:20:48.777 "num_blocks": 65536, 00:20:48.777 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:48.777 "assigned_rate_limits": { 00:20:48.777 "rw_ios_per_sec": 0, 00:20:48.777 "rw_mbytes_per_sec": 0, 00:20:48.777 "r_mbytes_per_sec": 0, 00:20:48.777 "w_mbytes_per_sec": 0 00:20:48.777 }, 00:20:48.777 "claimed": true, 00:20:48.777 "claim_type": "exclusive_write", 00:20:48.777 "zoned": false, 00:20:48.777 "supported_io_types": { 00:20:48.777 "read": true, 00:20:48.777 "write": true, 00:20:48.777 "unmap": true, 00:20:48.777 "flush": true, 00:20:48.777 "reset": true, 00:20:48.777 "nvme_admin": false, 00:20:48.777 "nvme_io": false, 00:20:48.777 "nvme_io_md": false, 00:20:48.777 "write_zeroes": true, 00:20:48.777 "zcopy": true, 00:20:48.777 "get_zone_info": false, 00:20:48.777 "zone_management": false, 00:20:48.777 "zone_append": false, 00:20:48.777 "compare": false, 00:20:48.777 "compare_and_write": false, 
00:20:48.777 "abort": true, 00:20:48.777 "seek_hole": false, 00:20:48.777 "seek_data": false, 00:20:48.777 "copy": true, 00:20:48.777 "nvme_iov_md": false 00:20:48.777 }, 00:20:48.777 "memory_domains": [ 00:20:48.777 { 00:20:48.777 "dma_device_id": "system", 00:20:48.777 "dma_device_type": 1 00:20:48.777 }, 00:20:48.777 { 00:20:48.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.777 "dma_device_type": 2 00:20:48.777 } 00:20:48.777 ], 00:20:48.777 "driver_specific": {} 00:20:48.777 } 00:20:48.777 ] 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.777 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.035 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.035 "name": "Existed_Raid", 00:20:49.035 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:49.035 "strip_size_kb": 0, 00:20:49.035 "state": "configuring", 00:20:49.035 "raid_level": "raid1", 00:20:49.035 "superblock": true, 00:20:49.035 "num_base_bdevs": 4, 00:20:49.035 "num_base_bdevs_discovered": 3, 00:20:49.035 "num_base_bdevs_operational": 4, 00:20:49.035 "base_bdevs_list": [ 00:20:49.035 { 00:20:49.035 "name": "BaseBdev1", 00:20:49.035 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:49.035 "is_configured": true, 00:20:49.035 "data_offset": 2048, 00:20:49.035 "data_size": 63488 00:20:49.035 }, 00:20:49.035 { 00:20:49.035 "name": null, 00:20:49.035 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:49.035 "is_configured": false, 00:20:49.035 "data_offset": 2048, 00:20:49.036 "data_size": 63488 00:20:49.036 }, 00:20:49.036 { 00:20:49.036 "name": "BaseBdev3", 00:20:49.036 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:49.036 "is_configured": true, 00:20:49.036 "data_offset": 2048, 00:20:49.036 "data_size": 63488 00:20:49.036 }, 00:20:49.036 { 00:20:49.036 "name": "BaseBdev4", 00:20:49.036 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:49.036 "is_configured": true, 00:20:49.036 "data_offset": 2048, 00:20:49.036 "data_size": 63488 00:20:49.036 } 
00:20:49.036 ] 00:20:49.036 }' 00:20:49.036 12:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.036 12:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.602 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.602 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:49.860 [2024-07-25 12:02:35.943881] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.860 12:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.118 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.118 "name": "Existed_Raid", 00:20:50.119 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:50.119 "strip_size_kb": 0, 00:20:50.119 "state": "configuring", 00:20:50.119 "raid_level": "raid1", 00:20:50.119 "superblock": true, 00:20:50.119 "num_base_bdevs": 4, 00:20:50.119 "num_base_bdevs_discovered": 2, 00:20:50.119 "num_base_bdevs_operational": 4, 00:20:50.119 "base_bdevs_list": [ 00:20:50.119 { 00:20:50.119 "name": "BaseBdev1", 00:20:50.119 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:50.119 "is_configured": true, 00:20:50.119 "data_offset": 2048, 00:20:50.119 "data_size": 63488 00:20:50.119 }, 00:20:50.119 { 00:20:50.119 "name": null, 00:20:50.119 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:50.119 "is_configured": false, 00:20:50.119 "data_offset": 2048, 00:20:50.119 "data_size": 63488 00:20:50.119 }, 00:20:50.119 { 00:20:50.119 "name": null, 00:20:50.119 
"uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:50.119 "is_configured": false, 00:20:50.119 "data_offset": 2048, 00:20:50.119 "data_size": 63488 00:20:50.119 }, 00:20:50.119 { 00:20:50.119 "name": "BaseBdev4", 00:20:50.119 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:50.119 "is_configured": true, 00:20:50.119 "data_offset": 2048, 00:20:50.119 "data_size": 63488 00:20:50.119 } 00:20:50.119 ] 00:20:50.119 }' 00:20:50.119 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.119 12:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.684 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.684 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:50.941 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:50.941 12:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:51.198 [2024-07-25 12:02:37.106971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.198 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.456 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.456 "name": "Existed_Raid", 00:20:51.456 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:51.456 "strip_size_kb": 0, 00:20:51.456 "state": "configuring", 00:20:51.456 "raid_level": "raid1", 00:20:51.456 "superblock": true, 00:20:51.456 "num_base_bdevs": 4, 00:20:51.456 "num_base_bdevs_discovered": 3, 00:20:51.456 "num_base_bdevs_operational": 4, 00:20:51.456 "base_bdevs_list": [ 00:20:51.456 { 00:20:51.456 "name": "BaseBdev1", 00:20:51.456 "uuid": 
"a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:51.456 "is_configured": true, 00:20:51.456 "data_offset": 2048, 00:20:51.456 "data_size": 63488 00:20:51.456 }, 00:20:51.456 { 00:20:51.456 "name": null, 00:20:51.456 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:51.456 "is_configured": false, 00:20:51.456 "data_offset": 2048, 00:20:51.456 "data_size": 63488 00:20:51.456 }, 00:20:51.456 { 00:20:51.456 "name": "BaseBdev3", 00:20:51.456 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:51.456 "is_configured": true, 00:20:51.456 "data_offset": 2048, 00:20:51.456 "data_size": 63488 00:20:51.456 }, 00:20:51.456 { 00:20:51.456 "name": "BaseBdev4", 00:20:51.456 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:51.456 "is_configured": true, 00:20:51.456 "data_offset": 2048, 00:20:51.456 "data_size": 63488 00:20:51.456 } 00:20:51.456 ] 00:20:51.456 }' 00:20:51.456 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.456 12:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.022 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.022 12:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:52.281 [2024-07-25 12:02:38.358300] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.281 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.539 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.539 "name": "Existed_Raid", 00:20:52.539 "uuid": 
"623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:52.539 "strip_size_kb": 0, 00:20:52.539 "state": "configuring", 00:20:52.539 "raid_level": "raid1", 00:20:52.539 "superblock": true, 00:20:52.539 "num_base_bdevs": 4, 00:20:52.539 "num_base_bdevs_discovered": 2, 00:20:52.539 "num_base_bdevs_operational": 4, 00:20:52.539 "base_bdevs_list": [ 00:20:52.539 { 00:20:52.539 "name": null, 00:20:52.539 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:52.539 "is_configured": false, 00:20:52.539 "data_offset": 2048, 00:20:52.539 "data_size": 63488 00:20:52.539 }, 00:20:52.539 { 00:20:52.539 "name": null, 00:20:52.539 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:52.539 "is_configured": false, 00:20:52.539 "data_offset": 2048, 00:20:52.539 "data_size": 63488 00:20:52.539 }, 00:20:52.539 { 00:20:52.539 "name": "BaseBdev3", 00:20:52.539 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:52.539 "is_configured": true, 00:20:52.539 "data_offset": 2048, 00:20:52.539 "data_size": 63488 00:20:52.539 }, 00:20:52.539 { 00:20:52.539 "name": "BaseBdev4", 00:20:52.539 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:52.539 "is_configured": true, 00:20:52.539 "data_offset": 2048, 00:20:52.539 "data_size": 63488 00:20:52.539 } 00:20:52.539 ] 00:20:52.539 }' 00:20:52.539 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.539 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.105 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:53.105 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.363 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:53.363 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:53.621 [2024-07-25 12:02:39.616043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.621 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.879 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.879 "name": "Existed_Raid", 00:20:53.879 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:53.879 "strip_size_kb": 0, 00:20:53.879 "state": "configuring", 00:20:53.879 "raid_level": "raid1", 00:20:53.879 "superblock": true, 00:20:53.879 "num_base_bdevs": 4, 00:20:53.879 "num_base_bdevs_discovered": 3, 00:20:53.879 "num_base_bdevs_operational": 4, 00:20:53.879 "base_bdevs_list": [ 00:20:53.879 { 00:20:53.879 "name": null, 00:20:53.879 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:53.879 "is_configured": false, 00:20:53.879 "data_offset": 2048, 00:20:53.879 "data_size": 63488 00:20:53.879 }, 00:20:53.879 { 00:20:53.879 "name": "BaseBdev2", 00:20:53.879 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:53.879 "is_configured": true, 00:20:53.879 "data_offset": 2048, 00:20:53.879 "data_size": 63488 00:20:53.879 }, 00:20:53.879 { 00:20:53.879 "name": "BaseBdev3", 00:20:53.879 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:53.879 "is_configured": true, 00:20:53.879 "data_offset": 2048, 00:20:53.879 "data_size": 63488 00:20:53.879 }, 00:20:53.879 { 00:20:53.879 "name": "BaseBdev4", 00:20:53.879 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:53.879 "is_configured": true, 00:20:53.879 "data_offset": 2048, 00:20:53.879 "data_size": 63488 00:20:53.879 } 00:20:53.879 ] 00:20:53.879 }' 00:20:53.879 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.879 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.444 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.445 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:54.703 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:54.703 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.703 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:54.961 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a75a99d0-08ff-48df-8b74-d73d32c8b8b4 00:20:55.219 [2024-07-25 12:02:41.091049] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:55.219 [2024-07-25 12:02:41.091208] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e3ff40 00:20:55.219 [2024-07-25 12:02:41.091221] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:55.219 [2024-07-25 12:02:41.091383] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e34690 00:20:55.219 [2024-07-25 12:02:41.091500] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x1e3ff40 00:20:55.219 [2024-07-25 12:02:41.091509] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e3ff40 00:20:55.219 [2024-07-25 12:02:41.091593] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.219 NewBaseBdev 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.219 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:55.477 [ 00:20:55.477 { 00:20:55.477 "name": "NewBaseBdev", 00:20:55.477 "aliases": [ 00:20:55.477 "a75a99d0-08ff-48df-8b74-d73d32c8b8b4" 00:20:55.477 ], 00:20:55.477 "product_name": "Malloc disk", 00:20:55.477 "block_size": 512, 00:20:55.477 "num_blocks": 65536, 00:20:55.477 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:55.477 "assigned_rate_limits": { 00:20:55.477 "rw_ios_per_sec": 0, 00:20:55.477 "rw_mbytes_per_sec": 0, 00:20:55.477 "r_mbytes_per_sec": 0, 00:20:55.477 "w_mbytes_per_sec": 0 00:20:55.477 }, 00:20:55.477 "claimed": true, 00:20:55.477 "claim_type": "exclusive_write", 00:20:55.477 "zoned": false, 00:20:55.477 "supported_io_types": { 00:20:55.477 "read": true, 00:20:55.477 "write": true, 00:20:55.477 "unmap": true, 00:20:55.477 "flush": true, 00:20:55.477 "reset": true, 00:20:55.477 "nvme_admin": false, 00:20:55.477 "nvme_io": false, 00:20:55.477 "nvme_io_md": false, 00:20:55.477 "write_zeroes": true, 00:20:55.477 "zcopy": true, 00:20:55.477 "get_zone_info": false, 00:20:55.477 "zone_management": false, 00:20:55.477 "zone_append": false, 00:20:55.477 "compare": false, 00:20:55.477 "compare_and_write": false, 00:20:55.477 "abort": true, 00:20:55.477 "seek_hole": false, 00:20:55.477 "seek_data": false, 00:20:55.477 "copy": true, 00:20:55.477 "nvme_iov_md": false 00:20:55.477 }, 00:20:55.477 "memory_domains": [ 00:20:55.477 { 00:20:55.477 "dma_device_id": "system", 00:20:55.477 "dma_device_type": 1 00:20:55.477 }, 00:20:55.477 { 00:20:55.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.477 "dma_device_type": 2 00:20:55.477 } 00:20:55.477 ], 00:20:55.477 "driver_specific": {} 00:20:55.477 } 00:20:55.477 ] 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.477 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.737 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.737 "name": "Existed_Raid", 00:20:55.737 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:55.737 "strip_size_kb": 0, 00:20:55.737 "state": "online", 00:20:55.737 "raid_level": "raid1", 00:20:55.737 "superblock": true, 00:20:55.737 "num_base_bdevs": 4, 00:20:55.737 "num_base_bdevs_discovered": 4, 00:20:55.737 "num_base_bdevs_operational": 4, 00:20:55.737 "base_bdevs_list": [ 00:20:55.737 { 00:20:55.737 "name": "NewBaseBdev", 00:20:55.737 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:55.737 "is_configured": true, 00:20:55.737 "data_offset": 2048, 00:20:55.737 "data_size": 63488 00:20:55.737 }, 00:20:55.737 { 00:20:55.737 "name": "BaseBdev2", 00:20:55.737 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:55.737 "is_configured": true, 00:20:55.737 "data_offset": 2048, 00:20:55.737 "data_size": 63488 00:20:55.737 }, 00:20:55.737 { 00:20:55.737 "name": "BaseBdev3", 00:20:55.737 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:55.737 "is_configured": true, 00:20:55.737 "data_offset": 2048, 00:20:55.737 "data_size": 63488 00:20:55.737 }, 00:20:55.737 { 00:20:55.737 "name": "BaseBdev4", 00:20:55.737 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:55.737 "is_configured": true, 00:20:55.737 "data_offset": 2048, 00:20:55.737 "data_size": 63488 00:20:55.737 } 00:20:55.737 ] 00:20:55.737 }' 00:20:55.737 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.737 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:56.302 12:02:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:56.302 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:56.560 [2024-07-25 12:02:42.587325] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:56.560 "name": "Existed_Raid", 00:20:56.560 "aliases": [ 00:20:56.560 "623618c6-34ee-4208-bc7e-df2f215bb0b7" 00:20:56.560 ], 00:20:56.560 "product_name": "Raid Volume", 00:20:56.560 "block_size": 512, 00:20:56.560 "num_blocks": 63488, 00:20:56.560 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:56.560 "assigned_rate_limits": { 00:20:56.560 "rw_ios_per_sec": 0, 00:20:56.560 "rw_mbytes_per_sec": 0, 00:20:56.560 "r_mbytes_per_sec": 0, 00:20:56.560 "w_mbytes_per_sec": 0 00:20:56.560 }, 00:20:56.560 "claimed": false, 00:20:56.560 "zoned": false, 00:20:56.560 "supported_io_types": { 00:20:56.560 "read": true, 00:20:56.560 "write": true, 00:20:56.560 "unmap": false, 00:20:56.560 "flush": false, 00:20:56.560 "reset": true, 00:20:56.560 "nvme_admin": false, 00:20:56.560 "nvme_io": false, 00:20:56.560 "nvme_io_md": false, 00:20:56.560 "write_zeroes": true, 00:20:56.560 "zcopy": false, 00:20:56.560 "get_zone_info": false, 00:20:56.560 "zone_management": false, 00:20:56.560 "zone_append": false, 00:20:56.560 "compare": false, 00:20:56.560 "compare_and_write": false, 00:20:56.560 "abort": false, 00:20:56.560 "seek_hole": false, 00:20:56.560 "seek_data": false, 00:20:56.560 "copy": false, 00:20:56.560 "nvme_iov_md": false 00:20:56.560 }, 00:20:56.560 "memory_domains": [ 00:20:56.560 { 00:20:56.560 "dma_device_id": "system", 00:20:56.560 "dma_device_type": 1 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.560 "dma_device_type": 2 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "system", 00:20:56.560 "dma_device_type": 1 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.560 "dma_device_type": 2 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "system", 00:20:56.560 "dma_device_type": 1 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.560 "dma_device_type": 2 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "system", 00:20:56.560 "dma_device_type": 1 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.560 "dma_device_type": 2 00:20:56.560 } 00:20:56.560 ], 00:20:56.560 "driver_specific": { 00:20:56.560 "raid": { 00:20:56.560 "uuid": "623618c6-34ee-4208-bc7e-df2f215bb0b7", 00:20:56.560 "strip_size_kb": 0, 00:20:56.560 "state": "online", 00:20:56.560 "raid_level": "raid1", 00:20:56.560 "superblock": true, 00:20:56.560 "num_base_bdevs": 4, 00:20:56.560 "num_base_bdevs_discovered": 4, 00:20:56.560 "num_base_bdevs_operational": 4, 00:20:56.560 "base_bdevs_list": [ 00:20:56.560 { 00:20:56.560 "name": "NewBaseBdev", 00:20:56.560 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:56.560 "is_configured": true, 00:20:56.560 "data_offset": 2048, 00:20:56.560 "data_size": 63488 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "name": "BaseBdev2", 00:20:56.560 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:56.560 "is_configured": true, 00:20:56.560 "data_offset": 2048, 00:20:56.560 
"data_size": 63488 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "name": "BaseBdev3", 00:20:56.560 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:56.560 "is_configured": true, 00:20:56.560 "data_offset": 2048, 00:20:56.560 "data_size": 63488 00:20:56.560 }, 00:20:56.560 { 00:20:56.560 "name": "BaseBdev4", 00:20:56.560 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:56.560 "is_configured": true, 00:20:56.560 "data_offset": 2048, 00:20:56.560 "data_size": 63488 00:20:56.560 } 00:20:56.560 ] 00:20:56.560 } 00:20:56.560 } 00:20:56.560 }' 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:56.560 BaseBdev2 00:20:56.560 BaseBdev3 00:20:56.560 BaseBdev4' 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:56.560 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.818 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.818 "name": "NewBaseBdev", 00:20:56.818 "aliases": [ 00:20:56.818 "a75a99d0-08ff-48df-8b74-d73d32c8b8b4" 00:20:56.818 ], 00:20:56.818 "product_name": "Malloc disk", 00:20:56.818 "block_size": 512, 00:20:56.818 "num_blocks": 65536, 00:20:56.819 "uuid": "a75a99d0-08ff-48df-8b74-d73d32c8b8b4", 00:20:56.819 "assigned_rate_limits": { 00:20:56.819 "rw_ios_per_sec": 0, 00:20:56.819 "rw_mbytes_per_sec": 0, 00:20:56.819 "r_mbytes_per_sec": 0, 00:20:56.819 "w_mbytes_per_sec": 0 00:20:56.819 }, 00:20:56.819 "claimed": true, 00:20:56.819 "claim_type": "exclusive_write", 00:20:56.819 "zoned": false, 00:20:56.819 "supported_io_types": { 00:20:56.819 "read": true, 00:20:56.819 "write": true, 00:20:56.819 "unmap": true, 00:20:56.819 "flush": true, 00:20:56.819 "reset": true, 00:20:56.819 "nvme_admin": false, 00:20:56.819 "nvme_io": false, 00:20:56.819 "nvme_io_md": false, 00:20:56.819 "write_zeroes": true, 00:20:56.819 "zcopy": true, 00:20:56.819 "get_zone_info": false, 00:20:56.819 "zone_management": false, 00:20:56.819 "zone_append": false, 00:20:56.819 "compare": false, 00:20:56.819 "compare_and_write": false, 00:20:56.819 "abort": true, 00:20:56.819 "seek_hole": false, 00:20:56.819 "seek_data": false, 00:20:56.819 "copy": true, 00:20:56.819 "nvme_iov_md": false 00:20:56.819 }, 00:20:56.819 "memory_domains": [ 00:20:56.819 { 00:20:56.819 "dma_device_id": "system", 00:20:56.819 "dma_device_type": 1 00:20:56.819 }, 00:20:56.819 { 00:20:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.819 "dma_device_type": 2 00:20:56.819 } 00:20:56.819 ], 00:20:56.819 "driver_specific": {} 00:20:56.819 }' 00:20:56.819 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.819 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.076 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.076 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.076 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.335 "name": "BaseBdev2", 00:20:57.335 "aliases": [ 00:20:57.335 "dc520e4c-b82f-4008-ad40-cb34f81eb0ad" 00:20:57.335 ], 00:20:57.335 "product_name": "Malloc disk", 00:20:57.335 "block_size": 512, 00:20:57.335 "num_blocks": 65536, 00:20:57.335 "uuid": "dc520e4c-b82f-4008-ad40-cb34f81eb0ad", 00:20:57.335 "assigned_rate_limits": { 00:20:57.335 "rw_ios_per_sec": 0, 00:20:57.335 "rw_mbytes_per_sec": 0, 00:20:57.335 "r_mbytes_per_sec": 0, 00:20:57.335 "w_mbytes_per_sec": 0 00:20:57.335 }, 00:20:57.335 "claimed": true, 00:20:57.335 "claim_type": "exclusive_write", 00:20:57.335 "zoned": false, 00:20:57.335 "supported_io_types": { 00:20:57.335 "read": true, 00:20:57.335 "write": true, 00:20:57.335 "unmap": true, 00:20:57.335 "flush": true, 00:20:57.335 "reset": true, 00:20:57.335 "nvme_admin": false, 00:20:57.335 "nvme_io": false, 00:20:57.335 "nvme_io_md": false, 00:20:57.335 "write_zeroes": true, 00:20:57.335 "zcopy": true, 00:20:57.335 "get_zone_info": false, 00:20:57.335 "zone_management": false, 00:20:57.335 "zone_append": false, 00:20:57.335 "compare": false, 00:20:57.335 "compare_and_write": false, 00:20:57.335 "abort": true, 00:20:57.335 "seek_hole": false, 00:20:57.335 "seek_data": false, 00:20:57.335 "copy": true, 00:20:57.335 "nvme_iov_md": false 00:20:57.335 }, 00:20:57.335 "memory_domains": [ 00:20:57.335 { 00:20:57.335 "dma_device_id": "system", 00:20:57.335 "dma_device_type": 1 00:20:57.335 }, 00:20:57.335 { 00:20:57.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.335 "dma_device_type": 2 00:20:57.335 } 00:20:57.335 ], 00:20:57.335 "driver_specific": {} 00:20:57.335 }' 00:20:57.335 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.594 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.852 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.853 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.853 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.853 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:57.853 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.110 "name": "BaseBdev3", 00:20:58.110 "aliases": [ 00:20:58.110 "d148b551-7251-4207-b696-db1d0e472bfe" 00:20:58.110 ], 00:20:58.110 "product_name": "Malloc disk", 00:20:58.110 "block_size": 512, 00:20:58.110 "num_blocks": 65536, 00:20:58.110 "uuid": "d148b551-7251-4207-b696-db1d0e472bfe", 00:20:58.110 "assigned_rate_limits": { 00:20:58.110 "rw_ios_per_sec": 0, 00:20:58.110 "rw_mbytes_per_sec": 0, 00:20:58.110 "r_mbytes_per_sec": 0, 00:20:58.110 "w_mbytes_per_sec": 0 00:20:58.110 }, 00:20:58.110 "claimed": true, 00:20:58.110 "claim_type": "exclusive_write", 00:20:58.110 "zoned": false, 00:20:58.110 "supported_io_types": { 00:20:58.110 "read": true, 00:20:58.110 "write": true, 00:20:58.110 "unmap": true, 00:20:58.110 "flush": true, 00:20:58.110 "reset": true, 00:20:58.110 "nvme_admin": false, 00:20:58.110 "nvme_io": false, 00:20:58.110 "nvme_io_md": false, 00:20:58.110 "write_zeroes": true, 00:20:58.110 "zcopy": true, 00:20:58.110 "get_zone_info": false, 00:20:58.110 "zone_management": false, 00:20:58.110 "zone_append": false, 00:20:58.110 "compare": false, 00:20:58.110 "compare_and_write": false, 00:20:58.110 "abort": true, 00:20:58.110 "seek_hole": false, 00:20:58.110 "seek_data": false, 00:20:58.110 "copy": true, 00:20:58.110 "nvme_iov_md": false 00:20:58.110 }, 00:20:58.110 "memory_domains": [ 00:20:58.110 { 00:20:58.110 "dma_device_id": "system", 00:20:58.110 "dma_device_type": 1 00:20:58.110 }, 00:20:58.110 { 00:20:58.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.110 "dma_device_type": 2 00:20:58.110 } 00:20:58.110 ], 00:20:58.110 "driver_specific": {} 00:20:58.110 }' 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.110 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:58.368 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.627 "name": "BaseBdev4", 00:20:58.627 "aliases": [ 00:20:58.627 "83049a72-8c9e-423b-ae85-350fb30f34c0" 00:20:58.627 ], 00:20:58.627 "product_name": "Malloc disk", 00:20:58.627 "block_size": 512, 00:20:58.627 "num_blocks": 65536, 00:20:58.627 "uuid": "83049a72-8c9e-423b-ae85-350fb30f34c0", 00:20:58.627 "assigned_rate_limits": { 00:20:58.627 "rw_ios_per_sec": 0, 00:20:58.627 "rw_mbytes_per_sec": 0, 00:20:58.627 "r_mbytes_per_sec": 0, 00:20:58.627 "w_mbytes_per_sec": 0 00:20:58.627 }, 00:20:58.627 "claimed": true, 00:20:58.627 "claim_type": "exclusive_write", 00:20:58.627 "zoned": false, 00:20:58.627 "supported_io_types": { 00:20:58.627 "read": true, 00:20:58.627 "write": true, 00:20:58.627 "unmap": true, 00:20:58.627 "flush": true, 00:20:58.627 "reset": true, 00:20:58.627 "nvme_admin": false, 00:20:58.627 "nvme_io": false, 00:20:58.627 "nvme_io_md": false, 00:20:58.627 "write_zeroes": true, 00:20:58.627 "zcopy": true, 00:20:58.627 "get_zone_info": false, 00:20:58.627 "zone_management": false, 00:20:58.627 "zone_append": false, 00:20:58.627 "compare": false, 00:20:58.627 "compare_and_write": false, 00:20:58.627 "abort": true, 00:20:58.627 "seek_hole": false, 00:20:58.627 "seek_data": false, 00:20:58.627 "copy": true, 00:20:58.627 "nvme_iov_md": false 00:20:58.627 }, 00:20:58.627 "memory_domains": [ 00:20:58.627 { 00:20:58.627 "dma_device_id": "system", 00:20:58.627 "dma_device_type": 1 00:20:58.627 }, 00:20:58.627 { 00:20:58.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.627 "dma_device_type": 2 00:20:58.627 } 00:20:58.627 ], 00:20:58.627 "driver_specific": {} 00:20:58.627 }' 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.627 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.886 12:02:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.886 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:59.145 [2024-07-25 12:02:45.145784] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.145 [2024-07-25 12:02:45.145810] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.145 [2024-07-25 12:02:45.145859] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.145 [2024-07-25 12:02:45.146107] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.145 [2024-07-25 12:02:45.146118] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e3ff40 name Existed_Raid, state offline 00:20:59.145 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 12935 00:20:59.145 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 12935 ']' 00:20:59.145 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 12935 00:20:59.145 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 12935 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 12935' 00:20:59.146 killing process with pid 12935 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 12935 00:20:59.146 [2024-07-25 12:02:45.226132] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.146 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 12935 00:20:59.146 [2024-07-25 12:02:45.258281] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:59.435 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:59.435 00:20:59.435 real 0m30.341s 00:20:59.435 user 0m55.719s 00:20:59.435 sys 0m5.472s 00:20:59.435 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.435 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.435 ************************************ 00:20:59.435 END TEST raid_state_function_test_sb 00:20:59.435 ************************************ 00:20:59.435 12:02:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:59.435 12:02:45 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:59.435 12:02:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.435 12:02:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:59.435 ************************************ 00:20:59.435 START TEST raid_superblock_test 00:20:59.435 ************************************ 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=18671 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 18671 /var/tmp/spdk-raid.sock 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:59.435 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 18671 ']' 00:20:59.436 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:59.436 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.436 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:59.436 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.436 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.695 [2024-07-25 12:02:45.581048] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:59.695 [2024-07-25 12:02:45.581106] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18671 ] 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:59.695 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:59.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.695 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:59.695 [2024-07-25 12:02:45.712680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.695 [2024-07-25 12:02:45.799567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.954 [2024-07-25 12:02:45.858340] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.954 [2024-07-25 12:02:45.858401] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.521 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:00.779 malloc1 00:21:00.779 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:01.038 [2024-07-25 12:02:46.916200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.038 [2024-07-25 12:02:46.916244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.038 [2024-07-25 12:02:46.916262] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22672f0 00:21:01.038 [2024-07-25 12:02:46.916274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.038 [2024-07-25 12:02:46.917797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.038 [2024-07-25 12:02:46.917825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:01.038 pt1 00:21:01.038 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:01.038 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.038 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:01.038 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:01.039 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:01.039 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.039 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.039 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.039 12:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:01.039 malloc2 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:01.297 [2024-07-25 12:02:47.373801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:01.297 [2024-07-25 12:02:47.373844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.297 [2024-07-25 12:02:47.373859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22686d0 00:21:01.297 [2024-07-25 12:02:47.373871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.297 [2024-07-25 12:02:47.375305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.297 [2024-07-25 12:02:47.375333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:01.297 pt2 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.297 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:01.556 malloc3 00:21:01.556 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:01.814 [2024-07-25 12:02:47.835265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:01.814 [2024-07-25 12:02:47.835307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.814 [2024-07-25 12:02:47.835324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24016b0 00:21:01.814 [2024-07-25 12:02:47.835335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.814 [2024-07-25 12:02:47.836687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.814 [2024-07-25 12:02:47.836714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:01.814 pt3 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.814 12:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:02.073 malloc4 00:21:02.073 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:02.332 [2024-07-25 12:02:48.292821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:02.332 [2024-07-25 12:02:48.292864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.332 [2024-07-25 12:02:48.292885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ff370 00:21:02.332 [2024-07-25 12:02:48.292896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.332 [2024-07-25 12:02:48.294251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.332 [2024-07-25 12:02:48.294280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:02.332 pt4 00:21:02.332 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:02.332 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.332 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:02.590 [2024-07-25 12:02:48.521449] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.590 [2024-07-25 12:02:48.522619] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.590 [2024-07-25 12:02:48.522671] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:02.590 [2024-07-25 12:02:48.522714] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:02.590 [2024-07-25 12:02:48.522877] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2260560 00:21:02.590 [2024-07-25 12:02:48.522887] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:02.590 [2024-07-25 12:02:48.523076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2400680 00:21:02.590 [2024-07-25 12:02:48.523231] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2260560 00:21:02.590 [2024-07-25 12:02:48.523241] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2260560 00:21:02.590 [2024-07-25 12:02:48.523334] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.590 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.849 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.849 "name": "raid_bdev1", 00:21:02.849 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:02.849 "strip_size_kb": 0, 00:21:02.849 "state": "online", 00:21:02.849 "raid_level": "raid1", 00:21:02.849 "superblock": true, 00:21:02.849 "num_base_bdevs": 4, 00:21:02.849 "num_base_bdevs_discovered": 4, 00:21:02.849 "num_base_bdevs_operational": 4, 00:21:02.849 "base_bdevs_list": [ 00:21:02.849 { 00:21:02.849 "name": "pt1", 00:21:02.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.849 "is_configured": true, 00:21:02.849 "data_offset": 2048, 00:21:02.849 "data_size": 63488 00:21:02.849 }, 00:21:02.849 { 00:21:02.849 
"name": "pt2", 00:21:02.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.849 "is_configured": true, 00:21:02.849 "data_offset": 2048, 00:21:02.849 "data_size": 63488 00:21:02.849 }, 00:21:02.849 { 00:21:02.849 "name": "pt3", 00:21:02.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.849 "is_configured": true, 00:21:02.849 "data_offset": 2048, 00:21:02.849 "data_size": 63488 00:21:02.849 }, 00:21:02.849 { 00:21:02.849 "name": "pt4", 00:21:02.849 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:02.849 "is_configured": true, 00:21:02.849 "data_offset": 2048, 00:21:02.849 "data_size": 63488 00:21:02.849 } 00:21:02.849 ] 00:21:02.849 }' 00:21:02.849 12:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.849 12:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:03.414 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:03.672 [2024-07-25 12:02:49.592506] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:03.672 "name": "raid_bdev1", 00:21:03.672 "aliases": [ 00:21:03.672 "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76" 00:21:03.672 ], 00:21:03.672 "product_name": "Raid Volume", 00:21:03.672 "block_size": 512, 00:21:03.672 "num_blocks": 63488, 00:21:03.672 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:03.672 "assigned_rate_limits": { 00:21:03.672 "rw_ios_per_sec": 0, 00:21:03.672 "rw_mbytes_per_sec": 0, 00:21:03.672 "r_mbytes_per_sec": 0, 00:21:03.672 "w_mbytes_per_sec": 0 00:21:03.672 }, 00:21:03.672 "claimed": false, 00:21:03.672 "zoned": false, 00:21:03.672 "supported_io_types": { 00:21:03.672 "read": true, 00:21:03.672 "write": true, 00:21:03.672 "unmap": false, 00:21:03.672 "flush": false, 00:21:03.672 "reset": true, 00:21:03.672 "nvme_admin": false, 00:21:03.672 "nvme_io": false, 00:21:03.672 "nvme_io_md": false, 00:21:03.672 "write_zeroes": true, 00:21:03.672 "zcopy": false, 00:21:03.672 "get_zone_info": false, 00:21:03.672 "zone_management": false, 00:21:03.672 "zone_append": false, 00:21:03.672 "compare": false, 00:21:03.672 "compare_and_write": false, 00:21:03.672 "abort": false, 00:21:03.672 "seek_hole": false, 00:21:03.672 "seek_data": false, 00:21:03.672 "copy": false, 00:21:03.672 "nvme_iov_md": false 00:21:03.672 }, 00:21:03.672 "memory_domains": [ 00:21:03.672 { 00:21:03.672 "dma_device_id": "system", 00:21:03.672 "dma_device_type": 1 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.672 "dma_device_type": 2 00:21:03.672 }, 00:21:03.672 { 
00:21:03.672 "dma_device_id": "system", 00:21:03.672 "dma_device_type": 1 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.672 "dma_device_type": 2 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "system", 00:21:03.672 "dma_device_type": 1 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.672 "dma_device_type": 2 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "system", 00:21:03.672 "dma_device_type": 1 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.672 "dma_device_type": 2 00:21:03.672 } 00:21:03.672 ], 00:21:03.672 "driver_specific": { 00:21:03.672 "raid": { 00:21:03.672 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:03.672 "strip_size_kb": 0, 00:21:03.672 "state": "online", 00:21:03.672 "raid_level": "raid1", 00:21:03.672 "superblock": true, 00:21:03.672 "num_base_bdevs": 4, 00:21:03.672 "num_base_bdevs_discovered": 4, 00:21:03.672 "num_base_bdevs_operational": 4, 00:21:03.672 "base_bdevs_list": [ 00:21:03.672 { 00:21:03.672 "name": "pt1", 00:21:03.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.672 "is_configured": true, 00:21:03.672 "data_offset": 2048, 00:21:03.672 "data_size": 63488 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "name": "pt2", 00:21:03.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.672 "is_configured": true, 00:21:03.672 "data_offset": 2048, 00:21:03.672 "data_size": 63488 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "name": "pt3", 00:21:03.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.672 "is_configured": true, 00:21:03.672 "data_offset": 2048, 00:21:03.672 "data_size": 63488 00:21:03.672 }, 00:21:03.672 { 00:21:03.672 "name": "pt4", 00:21:03.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:03.672 "is_configured": true, 00:21:03.672 "data_offset": 2048, 00:21:03.672 "data_size": 63488 00:21:03.672 } 00:21:03.672 ] 00:21:03.672 } 00:21:03.672 } 00:21:03.672 }' 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:03.672 pt2 00:21:03.672 pt3 00:21:03.672 pt4' 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:03.672 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.930 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.930 "name": "pt1", 00:21:03.930 "aliases": [ 00:21:03.930 "00000000-0000-0000-0000-000000000001" 00:21:03.930 ], 00:21:03.930 "product_name": "passthru", 00:21:03.930 "block_size": 512, 00:21:03.930 "num_blocks": 65536, 00:21:03.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.930 "assigned_rate_limits": { 00:21:03.930 "rw_ios_per_sec": 0, 00:21:03.930 "rw_mbytes_per_sec": 0, 00:21:03.930 "r_mbytes_per_sec": 0, 00:21:03.930 "w_mbytes_per_sec": 0 00:21:03.930 }, 00:21:03.930 "claimed": true, 00:21:03.930 "claim_type": "exclusive_write", 00:21:03.930 "zoned": false, 00:21:03.930 "supported_io_types": { 00:21:03.930 "read": true, 00:21:03.930 "write": true, 
00:21:03.930 "unmap": true, 00:21:03.930 "flush": true, 00:21:03.930 "reset": true, 00:21:03.930 "nvme_admin": false, 00:21:03.930 "nvme_io": false, 00:21:03.930 "nvme_io_md": false, 00:21:03.930 "write_zeroes": true, 00:21:03.930 "zcopy": true, 00:21:03.930 "get_zone_info": false, 00:21:03.930 "zone_management": false, 00:21:03.930 "zone_append": false, 00:21:03.930 "compare": false, 00:21:03.930 "compare_and_write": false, 00:21:03.930 "abort": true, 00:21:03.930 "seek_hole": false, 00:21:03.930 "seek_data": false, 00:21:03.930 "copy": true, 00:21:03.930 "nvme_iov_md": false 00:21:03.930 }, 00:21:03.930 "memory_domains": [ 00:21:03.930 { 00:21:03.930 "dma_device_id": "system", 00:21:03.930 "dma_device_type": 1 00:21:03.930 }, 00:21:03.930 { 00:21:03.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.930 "dma_device_type": 2 00:21:03.930 } 00:21:03.930 ], 00:21:03.930 "driver_specific": { 00:21:03.930 "passthru": { 00:21:03.930 "name": "pt1", 00:21:03.930 "base_bdev_name": "malloc1" 00:21:03.930 } 00:21:03.930 } 00:21:03.930 }' 00:21:03.930 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.930 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.930 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.930 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.930 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.930 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:04.188 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.445 "name": "pt2", 00:21:04.445 "aliases": [ 00:21:04.445 "00000000-0000-0000-0000-000000000002" 00:21:04.445 ], 00:21:04.445 "product_name": "passthru", 00:21:04.445 "block_size": 512, 00:21:04.445 "num_blocks": 65536, 00:21:04.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.445 "assigned_rate_limits": { 00:21:04.445 "rw_ios_per_sec": 0, 00:21:04.445 "rw_mbytes_per_sec": 0, 00:21:04.445 "r_mbytes_per_sec": 0, 00:21:04.445 "w_mbytes_per_sec": 0 00:21:04.445 }, 00:21:04.445 "claimed": true, 00:21:04.445 "claim_type": "exclusive_write", 00:21:04.445 "zoned": false, 00:21:04.445 "supported_io_types": { 00:21:04.445 "read": true, 00:21:04.445 "write": true, 00:21:04.445 "unmap": true, 00:21:04.445 "flush": true, 00:21:04.445 "reset": true, 00:21:04.445 "nvme_admin": false, 00:21:04.445 
"nvme_io": false, 00:21:04.445 "nvme_io_md": false, 00:21:04.445 "write_zeroes": true, 00:21:04.445 "zcopy": true, 00:21:04.445 "get_zone_info": false, 00:21:04.445 "zone_management": false, 00:21:04.445 "zone_append": false, 00:21:04.445 "compare": false, 00:21:04.445 "compare_and_write": false, 00:21:04.445 "abort": true, 00:21:04.445 "seek_hole": false, 00:21:04.445 "seek_data": false, 00:21:04.445 "copy": true, 00:21:04.445 "nvme_iov_md": false 00:21:04.445 }, 00:21:04.445 "memory_domains": [ 00:21:04.445 { 00:21:04.445 "dma_device_id": "system", 00:21:04.445 "dma_device_type": 1 00:21:04.445 }, 00:21:04.445 { 00:21:04.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.445 "dma_device_type": 2 00:21:04.445 } 00:21:04.445 ], 00:21:04.445 "driver_specific": { 00:21:04.445 "passthru": { 00:21:04.445 "name": "pt2", 00:21:04.445 "base_bdev_name": "malloc2" 00:21:04.445 } 00:21:04.445 } 00:21:04.445 }' 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.445 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:04.703 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.961 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.961 "name": "pt3", 00:21:04.961 "aliases": [ 00:21:04.961 "00000000-0000-0000-0000-000000000003" 00:21:04.961 ], 00:21:04.961 "product_name": "passthru", 00:21:04.961 "block_size": 512, 00:21:04.961 "num_blocks": 65536, 00:21:04.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.961 "assigned_rate_limits": { 00:21:04.961 "rw_ios_per_sec": 0, 00:21:04.961 "rw_mbytes_per_sec": 0, 00:21:04.961 "r_mbytes_per_sec": 0, 00:21:04.961 "w_mbytes_per_sec": 0 00:21:04.961 }, 00:21:04.961 "claimed": true, 00:21:04.961 "claim_type": "exclusive_write", 00:21:04.961 "zoned": false, 00:21:04.961 "supported_io_types": { 00:21:04.961 "read": true, 00:21:04.961 "write": true, 00:21:04.961 "unmap": true, 00:21:04.961 "flush": true, 00:21:04.961 "reset": true, 00:21:04.961 "nvme_admin": false, 00:21:04.961 "nvme_io": false, 00:21:04.961 "nvme_io_md": false, 00:21:04.961 "write_zeroes": true, 00:21:04.961 "zcopy": true, 00:21:04.961 
"get_zone_info": false, 00:21:04.961 "zone_management": false, 00:21:04.961 "zone_append": false, 00:21:04.961 "compare": false, 00:21:04.962 "compare_and_write": false, 00:21:04.962 "abort": true, 00:21:04.962 "seek_hole": false, 00:21:04.962 "seek_data": false, 00:21:04.962 "copy": true, 00:21:04.962 "nvme_iov_md": false 00:21:04.962 }, 00:21:04.962 "memory_domains": [ 00:21:04.962 { 00:21:04.962 "dma_device_id": "system", 00:21:04.962 "dma_device_type": 1 00:21:04.962 }, 00:21:04.962 { 00:21:04.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.962 "dma_device_type": 2 00:21:04.962 } 00:21:04.962 ], 00:21:04.962 "driver_specific": { 00:21:04.962 "passthru": { 00:21:04.962 "name": "pt3", 00:21:04.962 "base_bdev_name": "malloc3" 00:21:04.962 } 00:21:04.962 } 00:21:04.962 }' 00:21:04.962 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.962 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.962 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.962 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:05.220 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.478 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.478 "name": "pt4", 00:21:05.478 "aliases": [ 00:21:05.478 "00000000-0000-0000-0000-000000000004" 00:21:05.478 ], 00:21:05.478 "product_name": "passthru", 00:21:05.478 "block_size": 512, 00:21:05.478 "num_blocks": 65536, 00:21:05.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:05.478 "assigned_rate_limits": { 00:21:05.478 "rw_ios_per_sec": 0, 00:21:05.478 "rw_mbytes_per_sec": 0, 00:21:05.478 "r_mbytes_per_sec": 0, 00:21:05.478 "w_mbytes_per_sec": 0 00:21:05.478 }, 00:21:05.478 "claimed": true, 00:21:05.478 "claim_type": "exclusive_write", 00:21:05.478 "zoned": false, 00:21:05.478 "supported_io_types": { 00:21:05.478 "read": true, 00:21:05.478 "write": true, 00:21:05.478 "unmap": true, 00:21:05.478 "flush": true, 00:21:05.478 "reset": true, 00:21:05.478 "nvme_admin": false, 00:21:05.478 "nvme_io": false, 00:21:05.478 "nvme_io_md": false, 00:21:05.478 "write_zeroes": true, 00:21:05.478 "zcopy": true, 00:21:05.478 "get_zone_info": false, 00:21:05.478 "zone_management": false, 00:21:05.478 "zone_append": false, 00:21:05.478 "compare": false, 
00:21:05.478 "compare_and_write": false, 00:21:05.478 "abort": true, 00:21:05.478 "seek_hole": false, 00:21:05.478 "seek_data": false, 00:21:05.478 "copy": true, 00:21:05.478 "nvme_iov_md": false 00:21:05.478 }, 00:21:05.478 "memory_domains": [ 00:21:05.478 { 00:21:05.478 "dma_device_id": "system", 00:21:05.478 "dma_device_type": 1 00:21:05.478 }, 00:21:05.478 { 00:21:05.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.478 "dma_device_type": 2 00:21:05.478 } 00:21:05.478 ], 00:21:05.478 "driver_specific": { 00:21:05.478 "passthru": { 00:21:05.478 "name": "pt4", 00:21:05.478 "base_bdev_name": "malloc4" 00:21:05.478 } 00:21:05.478 } 00:21:05.478 }' 00:21:05.478 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.478 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.736 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:05.995 [2024-07-25 12:02:52.026900] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.995 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 00:21:05.995 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 ']' 00:21:05.995 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.253 [2024-07-25 12:02:52.251196] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.253 [2024-07-25 12:02:52.251211] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.253 [2024-07-25 12:02:52.251255] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.253 [2024-07-25 12:02:52.251330] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.253 [2024-07-25 12:02:52.251345] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2260560 name raid_bdev1, state offline 00:21:06.253 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.253 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:06.511 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:06.511 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:06.511 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.511 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:06.769 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.769 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.028 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.028 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:07.286 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.286 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:07.286 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:07.286 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:07.543 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:07.543 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:07.543 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:07.544 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:07.802 [2024-07-25 12:02:53.819272] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:07.802 [2024-07-25 12:02:53.820526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:07.802 [2024-07-25 12:02:53.820568] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:07.802 [2024-07-25 12:02:53.820598] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:07.802 [2024-07-25 12:02:53.820643] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:07.802 [2024-07-25 12:02:53.820680] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:07.802 [2024-07-25 12:02:53.820701] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:07.802 [2024-07-25 12:02:53.820721] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:07.802 [2024-07-25 12:02:53.820737] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.802 [2024-07-25 12:02:53.820746] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x240ad50 name raid_bdev1, state configuring 00:21:07.802 request: 00:21:07.802 { 00:21:07.802 "name": "raid_bdev1", 00:21:07.802 "raid_level": "raid1", 00:21:07.802 "base_bdevs": [ 00:21:07.802 "malloc1", 00:21:07.802 "malloc2", 00:21:07.802 "malloc3", 00:21:07.802 "malloc4" 00:21:07.802 ], 00:21:07.802 "superblock": false, 00:21:07.802 "method": "bdev_raid_create", 00:21:07.802 "req_id": 1 00:21:07.802 } 00:21:07.802 Got JSON-RPC error response 00:21:07.802 response: 00:21:07.802 { 00:21:07.802 "code": -17, 00:21:07.802 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:07.802 } 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.802 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:08.060 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:08.060 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:08.060 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.318 [2024-07-25 12:02:54.276410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.318 [2024-07-25 12:02:54.276445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.318 [2024-07-25 12:02:54.276460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x240a3f0 00:21:08.318 [2024-07-25 12:02:54.276472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.318 [2024-07-25 12:02:54.277937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.318 [2024-07-25 12:02:54.277964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.319 [2024-07-25 12:02:54.278024] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.319 [2024-07-25 12:02:54.278050] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.319 pt1 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.319 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.577 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.577 "name": "raid_bdev1", 00:21:08.577 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:08.577 "strip_size_kb": 0, 00:21:08.577 "state": "configuring", 00:21:08.577 "raid_level": "raid1", 00:21:08.577 "superblock": true, 00:21:08.577 "num_base_bdevs": 4, 00:21:08.577 "num_base_bdevs_discovered": 1, 00:21:08.577 "num_base_bdevs_operational": 4, 00:21:08.577 "base_bdevs_list": [ 00:21:08.577 { 00:21:08.577 "name": "pt1", 00:21:08.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.577 "is_configured": true, 00:21:08.577 "data_offset": 2048, 00:21:08.577 "data_size": 63488 00:21:08.577 }, 00:21:08.577 { 00:21:08.577 "name": null, 00:21:08.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.577 "is_configured": false, 00:21:08.577 "data_offset": 2048, 00:21:08.577 "data_size": 63488 00:21:08.577 }, 00:21:08.577 { 00:21:08.577 
"name": null, 00:21:08.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:08.577 "is_configured": false, 00:21:08.577 "data_offset": 2048, 00:21:08.577 "data_size": 63488 00:21:08.577 }, 00:21:08.577 { 00:21:08.577 "name": null, 00:21:08.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:08.577 "is_configured": false, 00:21:08.577 "data_offset": 2048, 00:21:08.577 "data_size": 63488 00:21:08.577 } 00:21:08.577 ] 00:21:08.577 }' 00:21:08.577 12:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.577 12:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.143 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:21:09.143 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.401 [2024-07-25 12:02:55.311328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.402 [2024-07-25 12:02:55.311377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.402 [2024-07-25 12:02:55.311398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2267520 00:21:09.402 [2024-07-25 12:02:55.311410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.402 [2024-07-25 12:02:55.311734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.402 [2024-07-25 12:02:55.311751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.402 [2024-07-25 12:02:55.311810] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.402 [2024-07-25 12:02:55.311830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.402 pt2 00:21:09.402 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:09.660 [2024-07-25 12:02:55.535934] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.660 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.919 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.919 "name": "raid_bdev1", 00:21:09.919 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:09.919 "strip_size_kb": 0, 00:21:09.919 "state": "configuring", 00:21:09.919 "raid_level": "raid1", 00:21:09.919 "superblock": true, 00:21:09.919 "num_base_bdevs": 4, 00:21:09.919 "num_base_bdevs_discovered": 1, 00:21:09.919 "num_base_bdevs_operational": 4, 00:21:09.919 "base_bdevs_list": [ 00:21:09.919 { 00:21:09.919 "name": "pt1", 00:21:09.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.919 "is_configured": true, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 }, 00:21:09.919 { 00:21:09.919 "name": null, 00:21:09.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.919 "is_configured": false, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 }, 00:21:09.919 { 00:21:09.919 "name": null, 00:21:09.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.919 "is_configured": false, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 }, 00:21:09.919 { 00:21:09.919 "name": null, 00:21:09.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:09.919 "is_configured": false, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 } 00:21:09.919 ] 00:21:09.919 }' 00:21:09.919 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.919 12:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.486 [2024-07-25 12:02:56.582681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.486 [2024-07-25 12:02:56.582729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.486 [2024-07-25 12:02:56.582750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2267750 00:21:10.486 [2024-07-25 12:02:56.582761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.486 [2024-07-25 12:02:56.583075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.486 [2024-07-25 12:02:56.583090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.486 [2024-07-25 12:02:56.583158] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:10.486 [2024-07-25 12:02:56.583178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.486 pt2 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.486 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:10.745 [2024-07-25 12:02:56.811290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:10.745 [2024-07-25 12:02:56.811334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.745 [2024-07-25 12:02:56.811349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2260fa0 00:21:10.745 [2024-07-25 12:02:56.811360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.745 [2024-07-25 12:02:56.811670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.745 [2024-07-25 12:02:56.811690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:10.745 [2024-07-25 12:02:56.811750] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:10.745 [2024-07-25 12:02:56.811770] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:10.745 pt3 00:21:10.745 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.745 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.745 12:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:11.003 [2024-07-25 12:02:57.023839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:11.003 [2024-07-25 12:02:57.023864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.003 [2024-07-25 12:02:57.023879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2261b40 00:21:11.003 [2024-07-25 12:02:57.023890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.003 [2024-07-25 12:02:57.024165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.003 [2024-07-25 12:02:57.024180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:11.003 [2024-07-25 12:02:57.024229] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:11.003 [2024-07-25 12:02:57.024247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:11.003 [2024-07-25 12:02:57.024359] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x225efc0 00:21:11.003 [2024-07-25 12:02:57.024369] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:11.003 [2024-07-25 12:02:57.024523] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225e8b0 00:21:11.003 [2024-07-25 12:02:57.024648] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225efc0 00:21:11.003 [2024-07-25 12:02:57.024657] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225efc0 00:21:11.003 [2024-07-25 12:02:57.024744] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.003 pt4 00:21:11.003 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:11.003 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:11.003 
12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.004 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.262 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.262 "name": "raid_bdev1", 00:21:11.262 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:11.262 "strip_size_kb": 0, 00:21:11.262 "state": "online", 00:21:11.262 "raid_level": "raid1", 00:21:11.262 "superblock": true, 00:21:11.262 "num_base_bdevs": 4, 00:21:11.262 "num_base_bdevs_discovered": 4, 00:21:11.262 "num_base_bdevs_operational": 4, 00:21:11.262 "base_bdevs_list": [ 00:21:11.262 { 00:21:11.262 "name": "pt1", 00:21:11.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.262 "is_configured": true, 00:21:11.262 "data_offset": 2048, 00:21:11.262 "data_size": 63488 00:21:11.262 }, 00:21:11.262 { 00:21:11.262 "name": "pt2", 00:21:11.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.262 "is_configured": true, 00:21:11.262 "data_offset": 2048, 00:21:11.262 "data_size": 63488 00:21:11.262 }, 00:21:11.262 { 00:21:11.262 "name": "pt3", 00:21:11.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.262 "is_configured": true, 00:21:11.262 "data_offset": 2048, 00:21:11.262 "data_size": 63488 00:21:11.262 }, 00:21:11.262 { 00:21:11.262 "name": "pt4", 00:21:11.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:11.262 "is_configured": true, 00:21:11.262 "data_offset": 2048, 00:21:11.262 "data_size": 63488 00:21:11.262 } 00:21:11.262 ] 00:21:11.262 }' 00:21:11.262 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.262 12:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.837 12:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:12.139 [2024-07-25 12:02:58.038845] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.139 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:12.139 "name": "raid_bdev1", 00:21:12.139 "aliases": [ 00:21:12.139 "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76" 00:21:12.139 ], 00:21:12.139 "product_name": "Raid Volume", 00:21:12.139 "block_size": 512, 00:21:12.139 "num_blocks": 63488, 00:21:12.139 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:12.139 "assigned_rate_limits": { 00:21:12.139 "rw_ios_per_sec": 0, 00:21:12.139 "rw_mbytes_per_sec": 0, 00:21:12.139 "r_mbytes_per_sec": 0, 00:21:12.139 "w_mbytes_per_sec": 0 00:21:12.139 }, 00:21:12.139 "claimed": false, 00:21:12.139 "zoned": false, 00:21:12.139 "supported_io_types": { 00:21:12.139 "read": true, 00:21:12.139 "write": true, 00:21:12.139 "unmap": false, 00:21:12.139 "flush": false, 00:21:12.139 "reset": true, 00:21:12.139 "nvme_admin": false, 00:21:12.139 "nvme_io": false, 00:21:12.139 "nvme_io_md": false, 00:21:12.139 "write_zeroes": true, 00:21:12.139 "zcopy": false, 00:21:12.139 "get_zone_info": false, 00:21:12.139 "zone_management": false, 00:21:12.139 "zone_append": false, 00:21:12.139 "compare": false, 00:21:12.139 "compare_and_write": false, 00:21:12.139 "abort": false, 00:21:12.139 "seek_hole": false, 00:21:12.139 "seek_data": false, 00:21:12.139 "copy": false, 00:21:12.139 "nvme_iov_md": false 00:21:12.139 }, 00:21:12.139 "memory_domains": [ 00:21:12.139 { 00:21:12.139 "dma_device_id": "system", 00:21:12.139 "dma_device_type": 1 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.139 "dma_device_type": 2 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "system", 00:21:12.139 "dma_device_type": 1 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.139 "dma_device_type": 2 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "system", 00:21:12.139 "dma_device_type": 1 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.139 "dma_device_type": 2 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "system", 00:21:12.139 "dma_device_type": 1 00:21:12.139 }, 00:21:12.139 { 00:21:12.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.139 "dma_device_type": 2 00:21:12.139 } 00:21:12.139 ], 00:21:12.139 "driver_specific": { 00:21:12.139 "raid": { 00:21:12.140 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:12.140 "strip_size_kb": 0, 00:21:12.140 "state": "online", 00:21:12.140 "raid_level": "raid1", 00:21:12.140 "superblock": true, 00:21:12.140 "num_base_bdevs": 4, 00:21:12.140 "num_base_bdevs_discovered": 4, 00:21:12.140 "num_base_bdevs_operational": 4, 00:21:12.140 "base_bdevs_list": [ 00:21:12.140 { 00:21:12.140 "name": "pt1", 00:21:12.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:12.140 "is_configured": true, 00:21:12.140 "data_offset": 2048, 00:21:12.140 "data_size": 63488 00:21:12.140 }, 00:21:12.140 { 00:21:12.140 "name": "pt2", 00:21:12.140 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:12.140 "is_configured": true, 00:21:12.140 "data_offset": 2048, 00:21:12.140 "data_size": 63488 00:21:12.140 }, 00:21:12.140 { 00:21:12.140 "name": "pt3", 00:21:12.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:12.140 "is_configured": true, 00:21:12.140 "data_offset": 2048, 00:21:12.140 "data_size": 63488 00:21:12.140 }, 00:21:12.140 { 00:21:12.140 "name": "pt4", 00:21:12.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:12.140 "is_configured": true, 00:21:12.140 "data_offset": 2048, 00:21:12.140 "data_size": 63488 00:21:12.140 } 00:21:12.140 ] 00:21:12.140 } 00:21:12.140 } 00:21:12.140 }' 00:21:12.140 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:12.140 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:12.140 pt2 00:21:12.140 pt3 00:21:12.140 pt4' 00:21:12.140 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.140 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:12.140 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.400 "name": "pt1", 00:21:12.400 "aliases": [ 00:21:12.400 "00000000-0000-0000-0000-000000000001" 00:21:12.400 ], 00:21:12.400 "product_name": "passthru", 00:21:12.400 "block_size": 512, 00:21:12.400 "num_blocks": 65536, 00:21:12.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:12.400 "assigned_rate_limits": { 00:21:12.400 "rw_ios_per_sec": 0, 00:21:12.400 "rw_mbytes_per_sec": 0, 00:21:12.400 "r_mbytes_per_sec": 0, 00:21:12.400 "w_mbytes_per_sec": 0 00:21:12.400 }, 00:21:12.400 "claimed": true, 00:21:12.400 "claim_type": "exclusive_write", 00:21:12.400 "zoned": false, 00:21:12.400 "supported_io_types": { 00:21:12.400 "read": true, 00:21:12.400 "write": true, 00:21:12.400 "unmap": true, 00:21:12.400 "flush": true, 00:21:12.400 "reset": true, 00:21:12.400 "nvme_admin": false, 00:21:12.400 "nvme_io": false, 00:21:12.400 "nvme_io_md": false, 00:21:12.400 "write_zeroes": true, 00:21:12.400 "zcopy": true, 00:21:12.400 "get_zone_info": false, 00:21:12.400 "zone_management": false, 00:21:12.400 "zone_append": false, 00:21:12.400 "compare": false, 00:21:12.400 "compare_and_write": false, 00:21:12.400 "abort": true, 00:21:12.400 "seek_hole": false, 00:21:12.400 "seek_data": false, 00:21:12.400 "copy": true, 00:21:12.400 "nvme_iov_md": false 00:21:12.400 }, 00:21:12.400 "memory_domains": [ 00:21:12.400 { 00:21:12.400 "dma_device_id": "system", 00:21:12.400 "dma_device_type": 1 00:21:12.400 }, 00:21:12.400 { 00:21:12.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.400 "dma_device_type": 2 00:21:12.400 } 00:21:12.400 ], 00:21:12.400 "driver_specific": { 00:21:12.400 "passthru": { 00:21:12.400 "name": "pt1", 00:21:12.400 "base_bdev_name": "malloc1" 00:21:12.400 } 00:21:12.400 } 00:21:12.400 }' 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.400 12:02:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.400 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:12.658 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.915 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.915 "name": "pt2", 00:21:12.915 "aliases": [ 00:21:12.915 "00000000-0000-0000-0000-000000000002" 00:21:12.916 ], 00:21:12.916 "product_name": "passthru", 00:21:12.916 "block_size": 512, 00:21:12.916 "num_blocks": 65536, 00:21:12.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:12.916 "assigned_rate_limits": { 00:21:12.916 "rw_ios_per_sec": 0, 00:21:12.916 "rw_mbytes_per_sec": 0, 00:21:12.916 "r_mbytes_per_sec": 0, 00:21:12.916 "w_mbytes_per_sec": 0 00:21:12.916 }, 00:21:12.916 "claimed": true, 00:21:12.916 "claim_type": "exclusive_write", 00:21:12.916 "zoned": false, 00:21:12.916 "supported_io_types": { 00:21:12.916 "read": true, 00:21:12.916 "write": true, 00:21:12.916 "unmap": true, 00:21:12.916 "flush": true, 00:21:12.916 "reset": true, 00:21:12.916 "nvme_admin": false, 00:21:12.916 "nvme_io": false, 00:21:12.916 "nvme_io_md": false, 00:21:12.916 "write_zeroes": true, 00:21:12.916 "zcopy": true, 00:21:12.916 "get_zone_info": false, 00:21:12.916 "zone_management": false, 00:21:12.916 "zone_append": false, 00:21:12.916 "compare": false, 00:21:12.916 "compare_and_write": false, 00:21:12.916 "abort": true, 00:21:12.916 "seek_hole": false, 00:21:12.916 "seek_data": false, 00:21:12.916 "copy": true, 00:21:12.916 "nvme_iov_md": false 00:21:12.916 }, 00:21:12.916 "memory_domains": [ 00:21:12.916 { 00:21:12.916 "dma_device_id": "system", 00:21:12.916 "dma_device_type": 1 00:21:12.916 }, 00:21:12.916 { 00:21:12.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.916 "dma_device_type": 2 00:21:12.916 } 00:21:12.916 ], 00:21:12.916 "driver_specific": { 00:21:12.916 "passthru": { 00:21:12.916 "name": "pt2", 00:21:12.916 "base_bdev_name": "malloc2" 00:21:12.916 } 00:21:12.916 } 00:21:12.916 }' 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.916 12:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.916 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:13.174 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.432 "name": "pt3", 00:21:13.432 "aliases": [ 00:21:13.432 "00000000-0000-0000-0000-000000000003" 00:21:13.432 ], 00:21:13.432 "product_name": "passthru", 00:21:13.432 "block_size": 512, 00:21:13.432 "num_blocks": 65536, 00:21:13.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.432 "assigned_rate_limits": { 00:21:13.432 "rw_ios_per_sec": 0, 00:21:13.432 "rw_mbytes_per_sec": 0, 00:21:13.432 "r_mbytes_per_sec": 0, 00:21:13.432 "w_mbytes_per_sec": 0 00:21:13.432 }, 00:21:13.432 "claimed": true, 00:21:13.432 "claim_type": "exclusive_write", 00:21:13.432 "zoned": false, 00:21:13.432 "supported_io_types": { 00:21:13.432 "read": true, 00:21:13.432 "write": true, 00:21:13.432 "unmap": true, 00:21:13.432 "flush": true, 00:21:13.432 "reset": true, 00:21:13.432 "nvme_admin": false, 00:21:13.432 "nvme_io": false, 00:21:13.432 "nvme_io_md": false, 00:21:13.432 "write_zeroes": true, 00:21:13.432 "zcopy": true, 00:21:13.432 "get_zone_info": false, 00:21:13.432 "zone_management": false, 00:21:13.432 "zone_append": false, 00:21:13.432 "compare": false, 00:21:13.432 "compare_and_write": false, 00:21:13.432 "abort": true, 00:21:13.432 "seek_hole": false, 00:21:13.432 "seek_data": false, 00:21:13.432 "copy": true, 00:21:13.432 "nvme_iov_md": false 00:21:13.432 }, 00:21:13.432 "memory_domains": [ 00:21:13.432 { 00:21:13.432 "dma_device_id": "system", 00:21:13.432 "dma_device_type": 1 00:21:13.432 }, 00:21:13.432 { 00:21:13.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.432 "dma_device_type": 2 00:21:13.432 } 00:21:13.432 ], 00:21:13.432 "driver_specific": { 00:21:13.432 "passthru": { 00:21:13.432 "name": "pt3", 00:21:13.432 "base_bdev_name": "malloc3" 00:21:13.432 } 00:21:13.432 } 00:21:13.432 }' 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.432 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:13.691 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.948 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.948 "name": "pt4", 00:21:13.948 "aliases": [ 00:21:13.948 "00000000-0000-0000-0000-000000000004" 00:21:13.948 ], 00:21:13.948 "product_name": "passthru", 00:21:13.948 "block_size": 512, 00:21:13.948 "num_blocks": 65536, 00:21:13.948 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:13.948 "assigned_rate_limits": { 00:21:13.948 "rw_ios_per_sec": 0, 00:21:13.948 "rw_mbytes_per_sec": 0, 00:21:13.948 "r_mbytes_per_sec": 0, 00:21:13.948 "w_mbytes_per_sec": 0 00:21:13.948 }, 00:21:13.948 "claimed": true, 00:21:13.948 "claim_type": "exclusive_write", 00:21:13.948 "zoned": false, 00:21:13.948 "supported_io_types": { 00:21:13.948 "read": true, 00:21:13.948 "write": true, 00:21:13.948 "unmap": true, 00:21:13.948 "flush": true, 00:21:13.948 "reset": true, 00:21:13.948 "nvme_admin": false, 00:21:13.948 "nvme_io": false, 00:21:13.948 "nvme_io_md": false, 00:21:13.948 "write_zeroes": true, 00:21:13.948 "zcopy": true, 00:21:13.948 "get_zone_info": false, 00:21:13.948 "zone_management": false, 00:21:13.948 "zone_append": false, 00:21:13.948 "compare": false, 00:21:13.948 "compare_and_write": false, 00:21:13.948 "abort": true, 00:21:13.948 "seek_hole": false, 00:21:13.948 "seek_data": false, 00:21:13.948 "copy": true, 00:21:13.948 "nvme_iov_md": false 00:21:13.948 }, 00:21:13.948 "memory_domains": [ 00:21:13.948 { 00:21:13.948 "dma_device_id": "system", 00:21:13.948 "dma_device_type": 1 00:21:13.948 }, 00:21:13.948 { 00:21:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.948 "dma_device_type": 2 00:21:13.948 } 00:21:13.948 ], 00:21:13.948 "driver_specific": { 00:21:13.948 "passthru": { 00:21:13.948 "name": "pt4", 00:21:13.948 "base_bdev_name": "malloc4" 00:21:13.948 } 00:21:13.948 } 00:21:13.948 }' 00:21:13.948 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.948 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.948 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.948 12:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.948 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.206 12:03:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.206 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:14.464 [2024-07-25 12:03:00.505348] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.464 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 '!=' 8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 ']' 00:21:14.464 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:14.464 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:14.465 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:14.465 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:14.723 [2024-07-25 12:03:00.733694] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.723 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.981 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.981 "name": "raid_bdev1", 00:21:14.981 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:14.981 "strip_size_kb": 0, 00:21:14.981 "state": "online", 00:21:14.981 "raid_level": "raid1", 00:21:14.981 "superblock": true, 00:21:14.981 "num_base_bdevs": 4, 00:21:14.981 "num_base_bdevs_discovered": 3, 00:21:14.981 "num_base_bdevs_operational": 3, 00:21:14.981 
"base_bdevs_list": [ 00:21:14.981 { 00:21:14.981 "name": null, 00:21:14.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.981 "is_configured": false, 00:21:14.981 "data_offset": 2048, 00:21:14.981 "data_size": 63488 00:21:14.981 }, 00:21:14.981 { 00:21:14.981 "name": "pt2", 00:21:14.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.981 "is_configured": true, 00:21:14.981 "data_offset": 2048, 00:21:14.981 "data_size": 63488 00:21:14.981 }, 00:21:14.981 { 00:21:14.981 "name": "pt3", 00:21:14.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.981 "is_configured": true, 00:21:14.981 "data_offset": 2048, 00:21:14.981 "data_size": 63488 00:21:14.981 }, 00:21:14.981 { 00:21:14.981 "name": "pt4", 00:21:14.981 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:14.981 "is_configured": true, 00:21:14.981 "data_offset": 2048, 00:21:14.981 "data_size": 63488 00:21:14.981 } 00:21:14.981 ] 00:21:14.981 }' 00:21:14.981 12:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.981 12:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.548 12:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:15.805 [2024-07-25 12:03:01.776452] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.805 [2024-07-25 12:03:01.776477] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.805 [2024-07-25 12:03:01.776529] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.805 [2024-07-25 12:03:01.776595] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.805 [2024-07-25 12:03:01.776607] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225efc0 name raid_bdev1, state offline 00:21:15.805 12:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.805 12:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:16.063 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:16.063 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:16.063 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:16.063 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.063 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:16.321 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:16.321 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.321 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:16.579 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.838 [2024-07-25 12:03:02.871274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.838 [2024-07-25 12:03:02.871323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.838 [2024-07-25 12:03:02.871339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2267750 00:21:16.838 [2024-07-25 12:03:02.871350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.838 [2024-07-25 12:03:02.872864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.838 [2024-07-25 12:03:02.872894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.838 [2024-07-25 12:03:02.872957] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:16.838 [2024-07-25 12:03:02.872985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.838 pt2 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.838 12:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.097 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.097 "name": "raid_bdev1", 00:21:17.097 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:17.097 "strip_size_kb": 0, 00:21:17.097 "state": "configuring", 00:21:17.097 "raid_level": "raid1", 00:21:17.097 "superblock": true, 
00:21:17.097 "num_base_bdevs": 4, 00:21:17.097 "num_base_bdevs_discovered": 1, 00:21:17.097 "num_base_bdevs_operational": 3, 00:21:17.097 "base_bdevs_list": [ 00:21:17.097 { 00:21:17.097 "name": null, 00:21:17.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.097 "is_configured": false, 00:21:17.097 "data_offset": 2048, 00:21:17.097 "data_size": 63488 00:21:17.097 }, 00:21:17.097 { 00:21:17.097 "name": "pt2", 00:21:17.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.097 "is_configured": true, 00:21:17.097 "data_offset": 2048, 00:21:17.097 "data_size": 63488 00:21:17.097 }, 00:21:17.097 { 00:21:17.097 "name": null, 00:21:17.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.097 "is_configured": false, 00:21:17.097 "data_offset": 2048, 00:21:17.097 "data_size": 63488 00:21:17.097 }, 00:21:17.097 { 00:21:17.097 "name": null, 00:21:17.097 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:17.097 "is_configured": false, 00:21:17.097 "data_offset": 2048, 00:21:17.097 "data_size": 63488 00:21:17.097 } 00:21:17.097 ] 00:21:17.097 }' 00:21:17.097 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.097 12:03:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:17.664 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:17.664 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:17.922 [2024-07-25 12:03:03.817773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:17.922 [2024-07-25 12:03:03.817817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.922 [2024-07-25 12:03:03.817833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225eaf0 00:21:17.922 [2024-07-25 12:03:03.817844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.922 [2024-07-25 12:03:03.818172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.922 [2024-07-25 12:03:03.818189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:17.922 [2024-07-25 12:03:03.818247] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:17.922 [2024-07-25 12:03:03.818266] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:17.922 pt3 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.922 12:03:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.922 12:03:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.181 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.181 "name": "raid_bdev1", 00:21:18.181 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:18.181 "strip_size_kb": 0, 00:21:18.181 "state": "configuring", 00:21:18.181 "raid_level": "raid1", 00:21:18.181 "superblock": true, 00:21:18.181 "num_base_bdevs": 4, 00:21:18.181 "num_base_bdevs_discovered": 2, 00:21:18.181 "num_base_bdevs_operational": 3, 00:21:18.181 "base_bdevs_list": [ 00:21:18.181 { 00:21:18.181 "name": null, 00:21:18.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.181 "is_configured": false, 00:21:18.181 "data_offset": 2048, 00:21:18.181 "data_size": 63488 00:21:18.181 }, 00:21:18.181 { 00:21:18.181 "name": "pt2", 00:21:18.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.181 "is_configured": true, 00:21:18.181 "data_offset": 2048, 00:21:18.181 "data_size": 63488 00:21:18.181 }, 00:21:18.181 { 00:21:18.181 "name": "pt3", 00:21:18.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.181 "is_configured": true, 00:21:18.181 "data_offset": 2048, 00:21:18.181 "data_size": 63488 00:21:18.181 }, 00:21:18.181 { 00:21:18.181 "name": null, 00:21:18.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.181 "is_configured": false, 00:21:18.181 "data_offset": 2048, 00:21:18.181 "data_size": 63488 00:21:18.181 } 00:21:18.181 ] 00:21:18.181 }' 00:21:18.181 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.181 12:03:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:18.748 [2024-07-25 12:03:04.780323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:18.748 [2024-07-25 12:03:04.780368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.748 [2024-07-25 12:03:04.780387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225dc20 00:21:18.748 [2024-07-25 12:03:04.780398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.748 [2024-07-25 12:03:04.780711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.748 [2024-07-25 12:03:04.780727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:18.748 [2024-07-25 12:03:04.780785] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:18.748 [2024-07-25 12:03:04.780805] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:18.748 [2024-07-25 12:03:04.780910] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x225e280 00:21:18.748 [2024-07-25 12:03:04.780920] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:18.748 [2024-07-25 12:03:04.781082] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2263580 00:21:18.748 [2024-07-25 12:03:04.781216] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225e280 00:21:18.748 [2024-07-25 12:03:04.781226] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225e280 00:21:18.748 [2024-07-25 12:03:04.781313] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.748 pt4 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.748 12:03:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.007 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.007 "name": "raid_bdev1", 00:21:19.007 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:19.007 "strip_size_kb": 0, 00:21:19.007 "state": "online", 00:21:19.007 "raid_level": "raid1", 00:21:19.007 "superblock": true, 00:21:19.007 "num_base_bdevs": 4, 00:21:19.007 "num_base_bdevs_discovered": 3, 00:21:19.007 "num_base_bdevs_operational": 3, 00:21:19.007 "base_bdevs_list": [ 00:21:19.007 { 00:21:19.007 "name": null, 00:21:19.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.007 "is_configured": false, 00:21:19.007 "data_offset": 2048, 00:21:19.007 "data_size": 63488 00:21:19.007 }, 00:21:19.007 { 00:21:19.007 "name": "pt2", 00:21:19.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.007 "is_configured": true, 00:21:19.007 "data_offset": 2048, 00:21:19.007 "data_size": 63488 00:21:19.007 }, 00:21:19.007 { 00:21:19.007 "name": "pt3", 00:21:19.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.007 "is_configured": true, 00:21:19.007 "data_offset": 2048, 00:21:19.007 "data_size": 63488 00:21:19.007 
}, 00:21:19.007 { 00:21:19.007 "name": "pt4", 00:21:19.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:19.007 "is_configured": true, 00:21:19.007 "data_offset": 2048, 00:21:19.007 "data_size": 63488 00:21:19.007 } 00:21:19.007 ] 00:21:19.007 }' 00:21:19.007 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.007 12:03:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.573 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:19.831 [2024-07-25 12:03:05.746978] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.831 [2024-07-25 12:03:05.747003] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.831 [2024-07-25 12:03:05.747052] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.831 [2024-07-25 12:03:05.747115] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.831 [2024-07-25 12:03:05.747125] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225e280 name raid_bdev1, state offline 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:21:19.831 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:21:20.090 12:03:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:20.090 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:20.348 [2024-07-25 12:03:06.276349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:20.348 [2024-07-25 12:03:06.276387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.348 [2024-07-25 12:03:06.276402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2263400 00:21:20.348 [2024-07-25 12:03:06.276413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.348 [2024-07-25 12:03:06.277908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.348 [2024-07-25 12:03:06.277935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:20.348 [2024-07-25 12:03:06.277995] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:20.348 [2024-07-25 12:03:06.278021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:20.348 [2024-07-25 12:03:06.278117] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than 
existing raid bdev raid_bdev1 (2) 00:21:20.349 [2024-07-25 12:03:06.278129] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:20.349 [2024-07-25 12:03:06.278154] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24009f0 name raid_bdev1, state configuring 00:21:20.349 [2024-07-25 12:03:06.278177] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:20.349 [2024-07-25 12:03:06.278246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:20.349 pt1 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.349 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.607 12:03:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.607 "name": "raid_bdev1", 00:21:20.607 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:20.607 "strip_size_kb": 0, 00:21:20.607 "state": "configuring", 00:21:20.607 "raid_level": "raid1", 00:21:20.607 "superblock": true, 00:21:20.607 "num_base_bdevs": 4, 00:21:20.607 "num_base_bdevs_discovered": 2, 00:21:20.607 "num_base_bdevs_operational": 3, 00:21:20.607 "base_bdevs_list": [ 00:21:20.607 { 00:21:20.607 "name": null, 00:21:20.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.607 "is_configured": false, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 }, 00:21:20.607 { 00:21:20.607 "name": "pt2", 00:21:20.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.607 "is_configured": true, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 }, 00:21:20.607 { 00:21:20.607 "name": "pt3", 00:21:20.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:20.607 "is_configured": true, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 }, 00:21:20.607 { 00:21:20.607 "name": null, 00:21:20.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:20.607 "is_configured": false, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 } 00:21:20.607 ] 00:21:20.607 }' 00:21:20.607 12:03:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.607 12:03:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.174 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:21.174 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:21.174 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:21.174 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:21.433 [2024-07-25 12:03:07.455481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:21.433 [2024-07-25 12:03:07.455530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.433 [2024-07-25 12:03:07.455552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2260050 00:21:21.433 [2024-07-25 12:03:07.455563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.433 [2024-07-25 12:03:07.455893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.433 [2024-07-25 12:03:07.455915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:21.433 [2024-07-25 12:03:07.455975] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:21.433 [2024-07-25 12:03:07.455994] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:21.433 [2024-07-25 12:03:07.456104] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22664c0 00:21:21.433 [2024-07-25 12:03:07.456114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:21.433 [2024-07-25 12:03:07.456288] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225f5a0 00:21:21.433 [2024-07-25 12:03:07.456415] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22664c0 00:21:21.433 [2024-07-25 12:03:07.456424] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22664c0 00:21:21.433 [2024-07-25 12:03:07.456514] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.433 pt4 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.433 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.692 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.692 "name": "raid_bdev1", 00:21:21.692 "uuid": "8b6d5127-8d39-4b3f-8c1b-59abf17f4c76", 00:21:21.692 "strip_size_kb": 0, 00:21:21.692 "state": "online", 00:21:21.692 "raid_level": "raid1", 00:21:21.692 "superblock": true, 00:21:21.692 "num_base_bdevs": 4, 00:21:21.692 "num_base_bdevs_discovered": 3, 00:21:21.692 "num_base_bdevs_operational": 3, 00:21:21.692 "base_bdevs_list": [ 00:21:21.692 { 00:21:21.692 "name": null, 00:21:21.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.692 "is_configured": false, 00:21:21.692 "data_offset": 2048, 00:21:21.692 "data_size": 63488 00:21:21.692 }, 00:21:21.692 { 00:21:21.692 "name": "pt2", 00:21:21.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.692 "is_configured": true, 00:21:21.692 "data_offset": 2048, 00:21:21.692 "data_size": 63488 00:21:21.692 }, 00:21:21.692 { 00:21:21.692 "name": "pt3", 00:21:21.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:21.692 "is_configured": true, 00:21:21.692 "data_offset": 2048, 00:21:21.692 "data_size": 63488 00:21:21.692 }, 00:21:21.692 { 00:21:21.692 "name": "pt4", 00:21:21.692 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:21.692 "is_configured": true, 00:21:21.692 "data_offset": 2048, 00:21:21.692 "data_size": 63488 00:21:21.692 } 00:21:21.692 ] 00:21:21.692 }' 00:21:21.692 12:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.692 12:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.258 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:22.258 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:22.516 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:22.516 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:22.516 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:22.775 [2024-07-25 12:03:08.727170] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 '!=' 8b6d5127-8d39-4b3f-8c1b-59abf17f4c76 ']' 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 18671 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 18671 ']' 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 18671 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:22.775 12:03:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 18671 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 18671' 00:21:22.775 killing process with pid 18671 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 18671 00:21:22.775 [2024-07-25 12:03:08.802961] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.775 [2024-07-25 12:03:08.803015] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.775 [2024-07-25 12:03:08.803074] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.775 [2024-07-25 12:03:08.803087] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22664c0 name raid_bdev1, state offline 00:21:22.775 12:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 18671 00:21:22.775 [2024-07-25 12:03:08.835608] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:23.034 12:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:23.034 00:21:23.034 real 0m23.505s 00:21:23.034 user 0m43.007s 00:21:23.034 sys 0m4.102s 00:21:23.034 12:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.034 12:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.034 ************************************ 00:21:23.034 END TEST raid_superblock_test 00:21:23.034 ************************************ 00:21:23.034 12:03:09 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:21:23.034 12:03:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:23.034 12:03:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:23.034 12:03:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:23.034 ************************************ 00:21:23.034 START TEST raid_read_error_test 00:21:23.034 ************************************ 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 
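The read-error test being set up here builds each BaseBdevN enumerated above as a three-layer stack before the RAID is assembled: a malloc bdev as backing store, an error-injection bdev on top of it (exposed as EE_<name>), and a passthru bdev carrying the BaseBdevN name that the RAID consumes. A minimal sketch of that loop, reusing the rpc.py calls that appear later in this trace; the rpc shell variable is shorthand introduced here for readability and is not part of the test script:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc            # 32 MB backing store, 512-byte blocks
        $rpc bdev_error_create ${bdev}_malloc                       # exposes EE_${bdev}_malloc for fault injection
        $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}   # the name the RAID test consumes
    done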
00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:23.034 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZFzfijsfRZ 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=23520 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 23520 /var/tmp/spdk-raid.sock 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 23520 ']' 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:23.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.035 12:03:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.294 [2024-07-25 12:03:09.174868] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
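bdevperf is launched in RPC-idle mode (-z) on a private RPC socket, and the test proceeds only once that socket answers; the SPDK/DPDK startup banner that follows is bdevperf's own output. A minimal sketch of the launch with the same flags as the command traced above; the redirect into the mktemp log and the rpc_get_methods poll are stand-ins for how the autotest helpers capture output and wait for the listener, not copies of them:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k \
        -q 1 -z -f -L bdev_raid > /raidtest/tmp.ZFzfijsfRZ &
    raid_pid=$!
    # wait until the UNIX-domain RPC socket accepts requests
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done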
00:21:23.294 [2024-07-25 12:03:09.174926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23520 ] 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:23.294 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:23.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.294 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:23.294 [2024-07-25 12:03:09.306563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.294 [2024-07-25 12:03:09.396760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.552 [2024-07-25 12:03:09.451623] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.552 [2024-07-25 12:03:09.451649] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.119 12:03:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.119 12:03:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:24.119 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.119 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.377 BaseBdev1_malloc 00:21:24.377 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:24.636 true 00:21:24.636 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:24.636 [2024-07-25 12:03:10.748130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:24.636 [2024-07-25 12:03:10.748179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.636 [2024-07-25 12:03:10.748197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14aa190 00:21:24.636 [2024-07-25 12:03:10.748208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.636 [2024-07-25 12:03:10.749791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.636 [2024-07-25 12:03:10.749817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.636 BaseBdev1 00:21:24.925 12:03:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.925 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.925 BaseBdev2_malloc 00:21:24.925 12:03:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:25.183 true 00:21:25.183 12:03:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.442 [2024-07-25 12:03:11.426325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:25.442 [2024-07-25 12:03:11.426365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.442 [2024-07-25 12:03:11.426383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14aee20 00:21:25.442 [2024-07-25 12:03:11.426394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.442 [2024-07-25 12:03:11.427776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.442 [2024-07-25 12:03:11.427802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.442 BaseBdev2 00:21:25.442 12:03:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.442 12:03:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.701 BaseBdev3_malloc 00:21:25.701 12:03:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:25.960 true 00:21:25.960 12:03:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:26.219 [2024-07-25 12:03:12.108421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:26.219 [2024-07-25 12:03:12.108460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.219 [2024-07-25 12:03:12.108481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14afd90 00:21:26.219 [2024-07-25 12:03:12.108492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.220 [2024-07-25 12:03:12.109879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.220 [2024-07-25 12:03:12.109904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:26.220 BaseBdev3 00:21:26.220 12:03:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:26.220 12:03:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:26.478 BaseBdev4_malloc 00:21:26.478 12:03:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:26.478 true 00:21:26.478 12:03:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:26.737 [2024-07-25 12:03:12.786535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:26.737 [2024-07-25 12:03:12.786575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.737 [2024-07-25 12:03:12.786592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b2000 00:21:26.737 [2024-07-25 12:03:12.786603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.737 [2024-07-25 12:03:12.788000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.737 [2024-07-25 12:03:12.788026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:26.737 BaseBdev4 00:21:26.737 12:03:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:26.996 [2024-07-25 12:03:13.007146] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.996 [2024-07-25 12:03:13.008299] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.996 [2024-07-25 12:03:13.008363] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.996 [2024-07-25 12:03:13.008416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:26.996 [2024-07-25 12:03:13.008631] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b2dd0 00:21:26.996 [2024-07-25 12:03:13.008641] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.996 [2024-07-25 12:03:13.008821] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14b4080 00:21:26.996 [2024-07-25 12:03:13.008966] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b2dd0 00:21:26.996 [2024-07-25 12:03:13.008975] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b2dd0 00:21:26.996 [2024-07-25 12:03:13.009066] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
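verify_raid_bdev_state, whose locals are being declared here, boils down to pulling the raid bdev's JSON over RPC and comparing a few fields against the expected values (online, raid1, strip size 0, four operational base bdevs at this point). A condensed sketch of that check built from the same bdev_raid_get_bdevs and jq pipeline shown in this trace; the explicit test lines simplify the script's comparisons rather than copy them:

    info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$info" | jq -r '.state')" = "online" ]
    [ "$(echo "$info" | jq -r '.raid_level')" = "raid1" ]
    [ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -eq 4 ]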
00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.996 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.997 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.997 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.255 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.255 "name": "raid_bdev1", 00:21:27.255 "uuid": "1789957b-8696-412f-8667-0b3f9976eda0", 00:21:27.256 "strip_size_kb": 0, 00:21:27.256 "state": "online", 00:21:27.256 "raid_level": "raid1", 00:21:27.256 "superblock": true, 00:21:27.256 "num_base_bdevs": 4, 00:21:27.256 "num_base_bdevs_discovered": 4, 00:21:27.256 "num_base_bdevs_operational": 4, 00:21:27.256 "base_bdevs_list": [ 00:21:27.256 { 00:21:27.256 "name": "BaseBdev1", 00:21:27.256 "uuid": "ed8a39ac-78eb-517c-855a-c3e6ebde2937", 00:21:27.256 "is_configured": true, 00:21:27.256 "data_offset": 2048, 00:21:27.256 "data_size": 63488 00:21:27.256 }, 00:21:27.256 { 00:21:27.256 "name": "BaseBdev2", 00:21:27.256 "uuid": "570328a8-f3a3-5ac4-b4ad-5e8d9173e9b6", 00:21:27.256 "is_configured": true, 00:21:27.256 "data_offset": 2048, 00:21:27.256 "data_size": 63488 00:21:27.256 }, 00:21:27.256 { 00:21:27.256 "name": "BaseBdev3", 00:21:27.256 "uuid": "6d81fa04-2893-5319-94f7-00b75c28d289", 00:21:27.256 "is_configured": true, 00:21:27.256 "data_offset": 2048, 00:21:27.256 "data_size": 63488 00:21:27.256 }, 00:21:27.256 { 00:21:27.256 "name": "BaseBdev4", 00:21:27.256 "uuid": "15d2ccdd-f820-58fa-97d7-50619200e6fc", 00:21:27.256 "is_configured": true, 00:21:27.256 "data_offset": 2048, 00:21:27.256 "data_size": 63488 00:21:27.256 } 00:21:27.256 ] 00:21:27.256 }' 00:21:27.256 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.256 12:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.822 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:27.822 12:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:27.822 [2024-07-25 12:03:13.925789] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14b4080 00:21:28.759 12:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.018 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.278 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.278 "name": "raid_bdev1", 00:21:29.278 "uuid": "1789957b-8696-412f-8667-0b3f9976eda0", 00:21:29.278 "strip_size_kb": 0, 00:21:29.278 "state": "online", 00:21:29.278 "raid_level": "raid1", 00:21:29.278 "superblock": true, 00:21:29.278 "num_base_bdevs": 4, 00:21:29.278 "num_base_bdevs_discovered": 4, 00:21:29.278 "num_base_bdevs_operational": 4, 00:21:29.278 "base_bdevs_list": [ 00:21:29.278 { 00:21:29.278 "name": "BaseBdev1", 00:21:29.278 "uuid": "ed8a39ac-78eb-517c-855a-c3e6ebde2937", 00:21:29.278 "is_configured": true, 00:21:29.278 "data_offset": 2048, 00:21:29.278 "data_size": 63488 00:21:29.278 }, 00:21:29.278 { 00:21:29.278 "name": "BaseBdev2", 00:21:29.278 "uuid": "570328a8-f3a3-5ac4-b4ad-5e8d9173e9b6", 00:21:29.278 "is_configured": true, 00:21:29.278 "data_offset": 2048, 00:21:29.278 "data_size": 63488 00:21:29.278 }, 00:21:29.278 { 00:21:29.278 "name": "BaseBdev3", 00:21:29.278 "uuid": "6d81fa04-2893-5319-94f7-00b75c28d289", 00:21:29.278 "is_configured": true, 00:21:29.278 "data_offset": 2048, 00:21:29.278 "data_size": 63488 00:21:29.278 }, 00:21:29.278 { 00:21:29.278 "name": "BaseBdev4", 00:21:29.278 "uuid": "15d2ccdd-f820-58fa-97d7-50619200e6fc", 00:21:29.278 "is_configured": true, 00:21:29.278 "data_offset": 2048, 00:21:29.278 "data_size": 63488 00:21:29.278 } 00:21:29.278 ] 00:21:29.278 }' 00:21:29.278 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.278 12:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.846 12:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:30.105 [2024-07-25 12:03:16.084375] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.105 [2024-07-25 12:03:16.084409] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.105 [2024-07-25 12:03:16.087298] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.105 [2024-07-25 12:03:16.087339] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.105 [2024-07-25 12:03:16.087445] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:21:30.105 [2024-07-25 12:03:16.087456] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b2dd0 name raid_bdev1, state offline 00:21:30.105 0 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 23520 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 23520 ']' 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 23520 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 23520 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 23520' 00:21:30.106 killing process with pid 23520 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 23520 00:21:30.106 [2024-07-25 12:03:16.161202] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.106 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 23520 00:21:30.106 [2024-07-25 12:03:16.187981] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZFzfijsfRZ 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:30.365 00:21:30.365 real 0m7.295s 00:21:30.365 user 0m11.622s 00:21:30.365 sys 0m1.296s 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.365 12:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 ************************************ 00:21:30.365 END TEST raid_read_error_test 00:21:30.365 ************************************ 00:21:30.365 12:03:16 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:21:30.365 12:03:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:30.365 12:03:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.365 12:03:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 ************************************ 00:21:30.365 START TEST raid_write_error_test 00:21:30.365 ************************************ 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 
write 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:30.365 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NfIV91kmbI 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=24918 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 24918 /var/tmp/spdk-raid.sock 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 24918 ']' 00:21:30.625 
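Once this bdevperf instance (pid 24918) is up and the four-way raid1 is assembled, the write-error test injects failures into the first base bdev's error layer and finally checks the failure rate bdevperf reported for raid_bdev1 in the log file created above. A minimal sketch of those closing steps; the trace suggests perform_tests runs in the background while the error is injected, but the steps are shown sequentially here for brevity:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests
    # raid1 redundancy is expected to absorb the injected write errors; the check
    # mirrors the 0.00 comparison used by the read-error test earlier in this trace
    fail_per_s=$(grep -v Job /raidtest/tmp.NfIV91kmbI | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]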
12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.625 12:03:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.625 [2024-07-25 12:03:16.548584] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:21:30.625 [2024-07-25 12:03:16.548638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24918 ] 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.625 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:30.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.0 cannot be used 
00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:30.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:30.626 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:30.626 [2024-07-25 12:03:16.679546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.885 [2024-07-25 12:03:16.766507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.885 [2024-07-25 12:03:16.828260] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.885 [2024-07-25 12:03:16.828303] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.453 12:03:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.453 12:03:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:31.453 12:03:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:31.453 12:03:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:31.712 BaseBdev1_malloc 00:21:31.712 12:03:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:31.971 true 00:21:31.971 12:03:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
EE_BaseBdev1_malloc -p BaseBdev1 00:21:32.230 [2024-07-25 12:03:18.093583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:32.230 [2024-07-25 12:03:18.093622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.230 [2024-07-25 12:03:18.093640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd22190 00:21:32.230 [2024-07-25 12:03:18.093651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.230 [2024-07-25 12:03:18.095218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.230 [2024-07-25 12:03:18.095244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:32.230 BaseBdev1 00:21:32.230 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:32.230 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:32.230 BaseBdev2_malloc 00:21:32.230 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:32.489 true 00:21:32.489 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:32.746 [2024-07-25 12:03:18.771615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:32.746 [2024-07-25 12:03:18.771653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.746 [2024-07-25 12:03:18.771671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd26e20 00:21:32.746 [2024-07-25 12:03:18.771682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.746 [2024-07-25 12:03:18.773066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.746 [2024-07-25 12:03:18.773093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.746 BaseBdev2 00:21:32.746 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:32.746 12:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:33.005 BaseBdev3_malloc 00:21:33.005 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:33.263 true 00:21:33.263 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:33.522 [2024-07-25 12:03:19.449628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:33.522 [2024-07-25 12:03:19.449666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.522 [2024-07-25 12:03:19.449686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0xd27d90 00:21:33.522 [2024-07-25 12:03:19.449697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.522 [2024-07-25 12:03:19.451067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.522 [2024-07-25 12:03:19.451092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:33.522 BaseBdev3 00:21:33.522 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:33.522 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:33.781 BaseBdev4_malloc 00:21:33.781 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:34.039 true 00:21:34.039 12:03:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:34.039 [2024-07-25 12:03:20.115694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:34.039 [2024-07-25 12:03:20.115735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.039 [2024-07-25 12:03:20.115753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd2a000 00:21:34.039 [2024-07-25 12:03:20.115765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.039 [2024-07-25 12:03:20.117191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.039 [2024-07-25 12:03:20.117217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:34.039 BaseBdev4 00:21:34.039 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:34.298 [2024-07-25 12:03:20.328278] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.299 [2024-07-25 12:03:20.329392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.299 [2024-07-25 12:03:20.329455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.299 [2024-07-25 12:03:20.329508] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:34.299 [2024-07-25 12:03:20.329728] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd2add0 00:21:34.299 [2024-07-25 12:03:20.329738] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:34.299 [2024-07-25 12:03:20.329909] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd2c080 00:21:34.299 [2024-07-25 12:03:20.330049] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd2add0 00:21:34.299 [2024-07-25 12:03:20.330058] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd2add0 00:21:34.299 [2024-07-25 12:03:20.330160] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.299 12:03:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.299 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.557 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.557 "name": "raid_bdev1", 00:21:34.557 "uuid": "0fdf000c-3c0d-4c24-86bf-c58e4c4c0a04", 00:21:34.557 "strip_size_kb": 0, 00:21:34.557 "state": "online", 00:21:34.557 "raid_level": "raid1", 00:21:34.557 "superblock": true, 00:21:34.557 "num_base_bdevs": 4, 00:21:34.557 "num_base_bdevs_discovered": 4, 00:21:34.557 "num_base_bdevs_operational": 4, 00:21:34.557 "base_bdevs_list": [ 00:21:34.557 { 00:21:34.557 "name": "BaseBdev1", 00:21:34.557 "uuid": "28755e7e-749d-5448-b638-b6b0ce2efed5", 00:21:34.557 "is_configured": true, 00:21:34.557 "data_offset": 2048, 00:21:34.557 "data_size": 63488 00:21:34.557 }, 00:21:34.557 { 00:21:34.557 "name": "BaseBdev2", 00:21:34.557 "uuid": "5ee369d7-995d-5659-b7ed-89f2cb2b9964", 00:21:34.557 "is_configured": true, 00:21:34.557 "data_offset": 2048, 00:21:34.557 "data_size": 63488 00:21:34.557 }, 00:21:34.557 { 00:21:34.557 "name": "BaseBdev3", 00:21:34.557 "uuid": "a48d927b-3b3f-5deb-a073-c9d1b2e33153", 00:21:34.557 "is_configured": true, 00:21:34.557 "data_offset": 2048, 00:21:34.557 "data_size": 63488 00:21:34.557 }, 00:21:34.557 { 00:21:34.557 "name": "BaseBdev4", 00:21:34.557 "uuid": "61c67e0e-ab9f-5de5-8926-e7c7f68d0dc6", 00:21:34.557 "is_configured": true, 00:21:34.557 "data_offset": 2048, 00:21:34.557 "data_size": 63488 00:21:34.557 } 00:21:34.557 ] 00:21:34.557 }' 00:21:34.557 12:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.557 12:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.125 12:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:35.125 12:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:35.125 [2024-07-25 12:03:21.186775] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd2c080 00:21:36.059 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:36.319 [2024-07-25 12:03:22.302416] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:36.319 [2024-07-25 12:03:22.302467] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:36.319 [2024-07-25 12:03:22.302672] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xd2c080 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.319 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.577 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.577 "name": "raid_bdev1", 00:21:36.577 "uuid": "0fdf000c-3c0d-4c24-86bf-c58e4c4c0a04", 00:21:36.577 "strip_size_kb": 0, 00:21:36.577 "state": "online", 00:21:36.577 "raid_level": "raid1", 00:21:36.577 "superblock": true, 00:21:36.577 "num_base_bdevs": 4, 00:21:36.577 "num_base_bdevs_discovered": 3, 00:21:36.577 "num_base_bdevs_operational": 3, 00:21:36.577 "base_bdevs_list": [ 00:21:36.577 { 00:21:36.577 "name": null, 00:21:36.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.577 "is_configured": false, 00:21:36.577 "data_offset": 2048, 00:21:36.577 "data_size": 63488 00:21:36.577 }, 00:21:36.577 { 00:21:36.577 "name": "BaseBdev2", 00:21:36.577 "uuid": "5ee369d7-995d-5659-b7ed-89f2cb2b9964", 00:21:36.577 "is_configured": true, 00:21:36.577 "data_offset": 2048, 00:21:36.577 "data_size": 63488 00:21:36.577 }, 00:21:36.577 { 00:21:36.577 "name": "BaseBdev3", 00:21:36.577 "uuid": "a48d927b-3b3f-5deb-a073-c9d1b2e33153", 00:21:36.578 "is_configured": true, 00:21:36.578 "data_offset": 2048, 
00:21:36.578 "data_size": 63488 00:21:36.578 }, 00:21:36.578 { 00:21:36.578 "name": "BaseBdev4", 00:21:36.578 "uuid": "61c67e0e-ab9f-5de5-8926-e7c7f68d0dc6", 00:21:36.578 "is_configured": true, 00:21:36.578 "data_offset": 2048, 00:21:36.578 "data_size": 63488 00:21:36.578 } 00:21:36.578 ] 00:21:36.578 }' 00:21:36.578 12:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.578 12:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:37.144 [2024-07-25 12:03:23.206207] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.144 [2024-07-25 12:03:23.206242] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.144 [2024-07-25 12:03:23.209116] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.144 [2024-07-25 12:03:23.209159] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.144 [2024-07-25 12:03:23.209247] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.144 [2024-07-25 12:03:23.209257] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd2add0 name raid_bdev1, state offline 00:21:37.144 0 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 24918 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 24918 ']' 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 24918 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.144 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 24918 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 24918' 00:21:37.402 killing process with pid 24918 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 24918 00:21:37.402 [2024-07-25 12:03:23.276608] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 24918 00:21:37.402 [2024-07-25 12:03:23.303625] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NfIV91kmbI 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:37.402 00:21:37.402 real 0m7.034s 00:21:37.402 user 0m11.167s 00:21:37.402 sys 0m1.205s 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.402 12:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.402 ************************************ 00:21:37.402 END TEST raid_write_error_test 00:21:37.402 ************************************ 00:21:37.660 12:03:23 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:21:37.660 12:03:23 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:21:37.660 12:03:23 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:21:37.660 12:03:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:37.660 12:03:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.660 12:03:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.660 ************************************ 00:21:37.660 START TEST raid_rebuild_test 00:21:37.660 ************************************ 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:37.660 12:03:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=26148 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 26148 /var/tmp/spdk-raid.sock 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 26148 ']' 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:37.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.660 12:03:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.660 [2024-07-25 12:03:23.657883] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:21:37.660 [2024-07-25 12:03:23.657940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid26148 ] 00:21:37.660 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:37.660 Zero copy mechanism will not be used. 
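For reference, the raid_rebuild_test setup that the following log lines exercise reduces to a short RPC-driven sequence: start bdevperf with its own UNIX-domain RPC socket, build two passthru bdevs on top of malloc bdevs, assemble them into a raid1 bdev, and confirm it comes online. The sketch below is a minimal standalone approximation of that flow, not the test script itself; the workspace path, socket path, bdevperf flags, and bdev names are taken from the surrounding log, while the background-PID variable and the simple socket-wait loop are illustrative stand-ins for the suite's waitforlisten helper.

    #!/usr/bin/env bash
    # Minimal sketch of the raid_rebuild_test setup, driven over the dedicated RPC socket.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Launch bdevperf as the RPC target (same flags as in the log above).
    $SPDK/build/examples/bdevperf -r $SOCK -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    RAID_PID=$!

    # Crude stand-in for waitforlisten: block until the RPC socket exists.
    while [ ! -S "$SOCK" ]; do sleep 0.1; done

    # Create two malloc bdevs and wrap each in a passthru bdev (the raid base bdevs).
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $SPDK/scripts/rpc.py -s $SOCK bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $SPDK/scripts/rpc.py -s $SOCK bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

    # Assemble the raid1 bdev and check that it is reported online with both base bdevs.
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The later rebuild steps in this log follow the same pattern against the same socket: bdev_raid_remove_base_bdev to drop a base bdev, bdev_raid_add_base_bdev to attach the spare, and repeated bdev_raid_get_bdevs polls to watch the "process" rebuild progress.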
00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:37.660 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:37.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:37.660 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:37.920 [2024-07-25 12:03:23.791364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.920 [2024-07-25 12:03:23.879265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.920 [2024-07-25 12:03:23.934842] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.920 [2024-07-25 12:03:23.934875] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.486 12:03:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.486 12:03:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:21:38.486 12:03:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:38.486 12:03:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:38.744 BaseBdev1_malloc 00:21:38.744 12:03:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:39.002 [2024-07-25 12:03:24.954793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:39.002 [2024-07-25 12:03:24.954835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.002 [2024-07-25 12:03:24.954853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20785f0 00:21:39.002 [2024-07-25 12:03:24.954865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.002 [2024-07-25 12:03:24.956370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.002 [2024-07-25 12:03:24.956396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:39.002 BaseBdev1 00:21:39.002 12:03:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:39.002 12:03:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:39.260 BaseBdev2_malloc 00:21:39.260 12:03:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:39.260 
[2024-07-25 12:03:25.332325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:39.260 [2024-07-25 12:03:25.332365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.260 [2024-07-25 12:03:25.332382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x221c130 00:21:39.260 [2024-07-25 12:03:25.332393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.260 [2024-07-25 12:03:25.333792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.260 [2024-07-25 12:03:25.333817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:39.260 BaseBdev2 00:21:39.260 12:03:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:39.518 spare_malloc 00:21:39.518 12:03:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:39.776 spare_delay 00:21:39.776 12:03:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:40.034 [2024-07-25 12:03:25.978369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:40.034 [2024-07-25 12:03:25.978408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.034 [2024-07-25 12:03:25.978425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x221b770 00:21:40.034 [2024-07-25 12:03:25.978441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.034 [2024-07-25 12:03:25.979781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.034 [2024-07-25 12:03:25.979808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:40.034 spare 00:21:40.034 12:03:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:40.034 [2024-07-25 12:03:26.138816] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.034 [2024-07-25 12:03:26.139982] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.034 [2024-07-25 12:03:26.140049] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2070270 00:21:40.034 [2024-07-25 12:03:26.140060] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:40.034 [2024-07-25 12:03:26.140249] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x221c3c0 00:21:40.034 [2024-07-25 12:03:26.140382] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2070270 00:21:40.034 [2024-07-25 12:03:26.140391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2070270 00:21:40.034 [2024-07-25 12:03:26.140494] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.293 "name": "raid_bdev1", 00:21:40.293 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:40.293 "strip_size_kb": 0, 00:21:40.293 "state": "online", 00:21:40.293 "raid_level": "raid1", 00:21:40.293 "superblock": false, 00:21:40.293 "num_base_bdevs": 2, 00:21:40.293 "num_base_bdevs_discovered": 2, 00:21:40.293 "num_base_bdevs_operational": 2, 00:21:40.293 "base_bdevs_list": [ 00:21:40.293 { 00:21:40.293 "name": "BaseBdev1", 00:21:40.293 "uuid": "5f2d1353-d2a6-507a-8882-84af3e96b61f", 00:21:40.293 "is_configured": true, 00:21:40.293 "data_offset": 0, 00:21:40.293 "data_size": 65536 00:21:40.293 }, 00:21:40.293 { 00:21:40.293 "name": "BaseBdev2", 00:21:40.293 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:40.293 "is_configured": true, 00:21:40.293 "data_offset": 0, 00:21:40.293 "data_size": 65536 00:21:40.293 } 00:21:40.293 ] 00:21:40.293 }' 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.293 12:03:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.859 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:40.859 12:03:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:41.118 [2024-07-25 12:03:27.061461] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.118 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:21:41.118 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.118 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:41.377 [2024-07-25 12:03:27.442274] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x221c3c0 00:21:41.377 /dev/nbd0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.377 1+0 records in 00:21:41.377 1+0 records out 00:21:41.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218162 s, 18.8 MB/s 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:41.377 12:03:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.636 12:03:27 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.636 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:41.636 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:41.636 12:03:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:45.838 65536+0 records in 00:21:45.838 65536+0 records out 00:21:45.838 33554432 bytes (34 MB, 32 MiB) copied, 4.12389 s, 8.1 MB/s 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.838 [2024-07-25 12:03:31.874328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.838 12:03:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:46.096 [2024-07-25 12:03:32.094928] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.096 
12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.096 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.355 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.355 "name": "raid_bdev1", 00:21:46.355 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:46.355 "strip_size_kb": 0, 00:21:46.355 "state": "online", 00:21:46.355 "raid_level": "raid1", 00:21:46.355 "superblock": false, 00:21:46.355 "num_base_bdevs": 2, 00:21:46.355 "num_base_bdevs_discovered": 1, 00:21:46.355 "num_base_bdevs_operational": 1, 00:21:46.355 "base_bdevs_list": [ 00:21:46.355 { 00:21:46.355 "name": null, 00:21:46.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.355 "is_configured": false, 00:21:46.355 "data_offset": 0, 00:21:46.355 "data_size": 65536 00:21:46.355 }, 00:21:46.355 { 00:21:46.355 "name": "BaseBdev2", 00:21:46.355 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:46.355 "is_configured": true, 00:21:46.355 "data_offset": 0, 00:21:46.355 "data_size": 65536 00:21:46.355 } 00:21:46.355 ] 00:21:46.355 }' 00:21:46.355 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.355 12:03:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.922 12:03:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:47.181 [2024-07-25 12:03:33.117638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:47.181 [2024-07-25 12:03:33.122426] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22108f0 00:21:47.181 [2024-07-25 12:03:33.124500] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:47.181 12:03:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.117 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.375 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:48.375 "name": "raid_bdev1", 00:21:48.375 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:48.375 "strip_size_kb": 0, 00:21:48.375 "state": "online", 00:21:48.375 "raid_level": "raid1", 00:21:48.376 "superblock": false, 00:21:48.376 "num_base_bdevs": 2, 00:21:48.376 "num_base_bdevs_discovered": 2, 00:21:48.376 "num_base_bdevs_operational": 2, 
00:21:48.376 "process": { 00:21:48.376 "type": "rebuild", 00:21:48.376 "target": "spare", 00:21:48.376 "progress": { 00:21:48.376 "blocks": 24576, 00:21:48.376 "percent": 37 00:21:48.376 } 00:21:48.376 }, 00:21:48.376 "base_bdevs_list": [ 00:21:48.376 { 00:21:48.376 "name": "spare", 00:21:48.376 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:48.376 "is_configured": true, 00:21:48.376 "data_offset": 0, 00:21:48.376 "data_size": 65536 00:21:48.376 }, 00:21:48.376 { 00:21:48.376 "name": "BaseBdev2", 00:21:48.376 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:48.376 "is_configured": true, 00:21:48.376 "data_offset": 0, 00:21:48.376 "data_size": 65536 00:21:48.376 } 00:21:48.376 ] 00:21:48.376 }' 00:21:48.376 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:48.376 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.376 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:48.376 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.376 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:48.634 [2024-07-25 12:03:34.663086] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.634 [2024-07-25 12:03:34.736128] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:48.634 [2024-07-25 12:03:34.736185] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.634 [2024-07-25 12:03:34.736200] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.634 [2024-07-25 12:03:34.736208] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.892 "name": "raid_bdev1", 00:21:48.892 "uuid": 
"6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:48.892 "strip_size_kb": 0, 00:21:48.892 "state": "online", 00:21:48.892 "raid_level": "raid1", 00:21:48.892 "superblock": false, 00:21:48.892 "num_base_bdevs": 2, 00:21:48.892 "num_base_bdevs_discovered": 1, 00:21:48.892 "num_base_bdevs_operational": 1, 00:21:48.892 "base_bdevs_list": [ 00:21:48.892 { 00:21:48.892 "name": null, 00:21:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.892 "is_configured": false, 00:21:48.892 "data_offset": 0, 00:21:48.892 "data_size": 65536 00:21:48.892 }, 00:21:48.892 { 00:21:48.892 "name": "BaseBdev2", 00:21:48.892 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:48.892 "is_configured": true, 00:21:48.892 "data_offset": 0, 00:21:48.892 "data_size": 65536 00:21:48.892 } 00:21:48.892 ] 00:21:48.892 }' 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.892 12:03:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.459 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:49.718 "name": "raid_bdev1", 00:21:49.718 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:49.718 "strip_size_kb": 0, 00:21:49.718 "state": "online", 00:21:49.718 "raid_level": "raid1", 00:21:49.718 "superblock": false, 00:21:49.718 "num_base_bdevs": 2, 00:21:49.718 "num_base_bdevs_discovered": 1, 00:21:49.718 "num_base_bdevs_operational": 1, 00:21:49.718 "base_bdevs_list": [ 00:21:49.718 { 00:21:49.718 "name": null, 00:21:49.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.718 "is_configured": false, 00:21:49.718 "data_offset": 0, 00:21:49.718 "data_size": 65536 00:21:49.718 }, 00:21:49.718 { 00:21:49.718 "name": "BaseBdev2", 00:21:49.718 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:49.718 "is_configured": true, 00:21:49.718 "data_offset": 0, 00:21:49.718 "data_size": 65536 00:21:49.718 } 00:21:49.718 ] 00:21:49.718 }' 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:49.718 12:03:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.977 [2024-07-25 12:03:36.031200] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:49.977 [2024-07-25 12:03:36.035917] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22108f0 00:21:49.977 [2024-07-25 12:03:36.037275] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:49.977 12:03:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:51.354 "name": "raid_bdev1", 00:21:51.354 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:51.354 "strip_size_kb": 0, 00:21:51.354 "state": "online", 00:21:51.354 "raid_level": "raid1", 00:21:51.354 "superblock": false, 00:21:51.354 "num_base_bdevs": 2, 00:21:51.354 "num_base_bdevs_discovered": 2, 00:21:51.354 "num_base_bdevs_operational": 2, 00:21:51.354 "process": { 00:21:51.354 "type": "rebuild", 00:21:51.354 "target": "spare", 00:21:51.354 "progress": { 00:21:51.354 "blocks": 24576, 00:21:51.354 "percent": 37 00:21:51.354 } 00:21:51.354 }, 00:21:51.354 "base_bdevs_list": [ 00:21:51.354 { 00:21:51.354 "name": "spare", 00:21:51.354 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 }, 00:21:51.354 { 00:21:51.354 "name": "BaseBdev2", 00:21:51.354 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 } 00:21:51.354 ] 00:21:51.354 }' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=722 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.354 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:51.613 "name": "raid_bdev1", 00:21:51.613 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:51.613 "strip_size_kb": 0, 00:21:51.613 "state": "online", 00:21:51.613 "raid_level": "raid1", 00:21:51.613 "superblock": false, 00:21:51.613 "num_base_bdevs": 2, 00:21:51.613 "num_base_bdevs_discovered": 2, 00:21:51.613 "num_base_bdevs_operational": 2, 00:21:51.613 "process": { 00:21:51.613 "type": "rebuild", 00:21:51.613 "target": "spare", 00:21:51.613 "progress": { 00:21:51.613 "blocks": 30720, 00:21:51.613 "percent": 46 00:21:51.613 } 00:21:51.613 }, 00:21:51.613 "base_bdevs_list": [ 00:21:51.613 { 00:21:51.613 "name": "spare", 00:21:51.613 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:51.613 "is_configured": true, 00:21:51.613 "data_offset": 0, 00:21:51.613 "data_size": 65536 00:21:51.613 }, 00:21:51.613 { 00:21:51.613 "name": "BaseBdev2", 00:21:51.613 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:51.613 "is_configured": true, 00:21:51.613 "data_offset": 0, 00:21:51.613 "data_size": 65536 00:21:51.613 } 00:21:51.613 ] 00:21:51.613 }' 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.613 12:03:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:21:52.989 "name": "raid_bdev1", 00:21:52.989 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:52.989 "strip_size_kb": 0, 00:21:52.989 "state": "online", 00:21:52.989 "raid_level": "raid1", 00:21:52.989 "superblock": false, 00:21:52.989 "num_base_bdevs": 2, 00:21:52.989 "num_base_bdevs_discovered": 2, 00:21:52.989 "num_base_bdevs_operational": 2, 00:21:52.989 "process": { 00:21:52.989 "type": "rebuild", 00:21:52.989 "target": "spare", 00:21:52.989 "progress": { 00:21:52.989 "blocks": 57344, 00:21:52.989 "percent": 87 00:21:52.989 } 00:21:52.989 }, 00:21:52.989 "base_bdevs_list": [ 00:21:52.989 { 00:21:52.989 "name": "spare", 00:21:52.989 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:52.989 "is_configured": true, 00:21:52.989 "data_offset": 0, 00:21:52.989 "data_size": 65536 00:21:52.989 }, 00:21:52.989 { 00:21:52.989 "name": "BaseBdev2", 00:21:52.989 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:52.989 "is_configured": true, 00:21:52.989 "data_offset": 0, 00:21:52.989 "data_size": 65536 00:21:52.989 } 00:21:52.989 ] 00:21:52.989 }' 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.989 12:03:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:53.248 [2024-07-25 12:03:39.260423] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:53.248 [2024-07-25 12:03:39.260475] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:53.248 [2024-07-25 12:03:39.260514] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.224 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:54.225 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.225 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:54.225 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:54.225 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:54.225 12:03:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:54.225 "name": "raid_bdev1", 00:21:54.225 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:54.225 "strip_size_kb": 0, 00:21:54.225 "state": "online", 00:21:54.225 "raid_level": "raid1", 00:21:54.225 "superblock": false, 00:21:54.225 "num_base_bdevs": 2, 00:21:54.225 "num_base_bdevs_discovered": 2, 00:21:54.225 "num_base_bdevs_operational": 2, 00:21:54.225 "base_bdevs_list": [ 00:21:54.225 { 00:21:54.225 "name": "spare", 00:21:54.225 "uuid": 
"963c04da-7729-53da-91cc-b321ba444a91", 00:21:54.225 "is_configured": true, 00:21:54.225 "data_offset": 0, 00:21:54.225 "data_size": 65536 00:21:54.225 }, 00:21:54.225 { 00:21:54.225 "name": "BaseBdev2", 00:21:54.225 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:54.225 "is_configured": true, 00:21:54.225 "data_offset": 0, 00:21:54.225 "data_size": 65536 00:21:54.225 } 00:21:54.225 ] 00:21:54.225 }' 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.225 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.483 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:54.483 "name": "raid_bdev1", 00:21:54.483 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:54.483 "strip_size_kb": 0, 00:21:54.483 "state": "online", 00:21:54.483 "raid_level": "raid1", 00:21:54.483 "superblock": false, 00:21:54.483 "num_base_bdevs": 2, 00:21:54.483 "num_base_bdevs_discovered": 2, 00:21:54.483 "num_base_bdevs_operational": 2, 00:21:54.483 "base_bdevs_list": [ 00:21:54.483 { 00:21:54.483 "name": "spare", 00:21:54.483 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:54.483 "is_configured": true, 00:21:54.483 "data_offset": 0, 00:21:54.483 "data_size": 65536 00:21:54.483 }, 00:21:54.483 { 00:21:54.483 "name": "BaseBdev2", 00:21:54.483 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:54.483 "is_configured": true, 00:21:54.483 "data_offset": 0, 00:21:54.483 "data_size": 65536 00:21:54.483 } 00:21:54.483 ] 00:21:54.483 }' 00:21:54.483 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:54.483 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:54.483 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:54.741 
12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.741 "name": "raid_bdev1", 00:21:54.741 "uuid": "6df5ad0d-4124-4016-a731-3662ee45961c", 00:21:54.741 "strip_size_kb": 0, 00:21:54.741 "state": "online", 00:21:54.741 "raid_level": "raid1", 00:21:54.741 "superblock": false, 00:21:54.741 "num_base_bdevs": 2, 00:21:54.741 "num_base_bdevs_discovered": 2, 00:21:54.741 "num_base_bdevs_operational": 2, 00:21:54.741 "base_bdevs_list": [ 00:21:54.741 { 00:21:54.741 "name": "spare", 00:21:54.741 "uuid": "963c04da-7729-53da-91cc-b321ba444a91", 00:21:54.741 "is_configured": true, 00:21:54.741 "data_offset": 0, 00:21:54.741 "data_size": 65536 00:21:54.741 }, 00:21:54.741 { 00:21:54.741 "name": "BaseBdev2", 00:21:54.741 "uuid": "0dae21d4-6fd0-55b5-af58-6548c5a97084", 00:21:54.741 "is_configured": true, 00:21:54.741 "data_offset": 0, 00:21:54.741 "data_size": 65536 00:21:54.741 } 00:21:54.741 ] 00:21:54.741 }' 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.741 12:03:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.307 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:55.565 [2024-07-25 12:03:41.622669] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.565 [2024-07-25 12:03:41.622693] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.565 [2024-07-25 12:03:41.622743] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.565 [2024-07-25 12:03:41.622796] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.565 [2024-07-25 12:03:41.622808] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2070270 name raid_bdev1, state offline 00:21:55.565 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.565 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:21:55.823 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:55.823 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:55.823 12:03:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.824 12:03:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.081 /dev/nbd0 00:21:56.081 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.081 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.081 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:56.081 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:56.081 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.082 1+0 records in 00:21:56.082 1+0 records out 00:21:56.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180508 s, 22.7 MB/s 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.082 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:56.340 /dev/nbd1 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.340 1+0 records in 00:21:56.340 1+0 records out 00:21:56.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300378 s, 13.6 MB/s 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.340 12:03:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:56.598 12:03:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:56.598 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.598 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.599 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.599 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:56.599 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.599 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.857 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 26148 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 26148 ']' 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 26148 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.115 12:03:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 26148 00:21:57.115 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.115 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.115 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 26148' 00:21:57.115 killing process with pid 26148 00:21:57.115 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 26148 00:21:57.115 Received shutdown signal, test time was about 60.000000 seconds 00:21:57.115 00:21:57.115 Latency(us) 00:21:57.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.115 =================================================================================================================== 00:21:57.115 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.115 [2024-07-25 12:03:43.047080] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.115 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 26148 00:21:57.115 [2024-07-25 12:03:43.071298] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@784 -- # return 0 00:21:57.375 00:21:57.375 real 0m19.672s 00:21:57.375 user 0m26.954s 00:21:57.375 sys 0m3.939s 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.375 ************************************ 00:21:57.375 END TEST raid_rebuild_test 00:21:57.375 ************************************ 00:21:57.375 12:03:43 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:21:57.375 12:03:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:57.375 12:03:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.375 12:03:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:57.375 ************************************ 00:21:57.375 START TEST raid_rebuild_test_sb 00:21:57.375 ************************************ 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:57.375 12:03:43 
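The lines above close raid_rebuild_test (its sys/user/real accounting and the END TEST banner) and immediately start the superblock variant: run_test re-invokes the same function as raid_rebuild_test_sb with superblock=true, which is why the next lines append -s to create_arg and why the base bdev layout later reports data_offset 2048 instead of the 0 seen in the first run. The data check the first run just finished is the NBD comparison a few lines back; a minimal sketch of that step, using only the RPC calls and device names from the trace (the polling loop is a simplified stand-in for the waitfornbd helper, which bounds its retries and does a 4 KiB test read):

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Expose the surviving base bdev and the rebuilt spare as kernel block devices.
  rpc nbd_start_disk BaseBdev1 /dev/nbd0
  rpc nbd_start_disk spare /dev/nbd1
  # Wait until both devices actually show up before touching them.
  for n in nbd0 nbd1; do
      until grep -q -w "$n" /proc/partitions; do sleep 0.1; done
  done
  # Byte-for-byte comparison starting at offset 0; any difference makes cmp exit non-zero.
  cmp -i 0 /dev/nbd0 /dev/nbd1
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1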
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=29747 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 29747 /var/tmp/spdk-raid.sock 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 29747 ']' 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:57.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.375 12:03:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.375 [2024-07-25 12:03:43.412548] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:21:57.375 [2024-07-25 12:03:43.412606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29747 ] 00:21:57.375 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:57.375 Zero copy mechanism will not be used. 
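The bdevperf command line above is the harness for this run: -r makes it serve RPC on /var/tmp/spdk-raid.sock, -t 60 sets the 60-second run time the previous instance reported at shutdown, -o 3M is the 3145728-byte I/O size behind the zero-copy notice just printed, and -z keeps it waiting so the configuration can be built over RPC before any I/O starts; waitforlisten 29747 then blocks until the daemon accepts RPCs on that socket (that is the "Waiting for process to start up and listen..." line). A stripped-down sketch of the same start-then-configure pattern; the polling loop here is an assumption standing in for SPDK's waitforlisten helper, not its actual implementation:

  SOCK=/var/tmp/spdk-raid.sock
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Poll until an RPC succeeds on the socket, i.e. the app is up and listening.
  until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  # Every rpc.py call that follows in the trace uses the same -s "$SOCK" socket.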
00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:57.375 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:57.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:57.375 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:57.633 [2024-07-25 12:03:43.545501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.633 [2024-07-25 12:03:43.631357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.633 [2024-07-25 12:03:43.694952] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.633 [2024-07-25 12:03:43.694989] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.198 12:03:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.198 12:03:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:58.198 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:58.198 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:58.456 BaseBdev1_malloc 00:21:58.456 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:58.713 [2024-07-25 12:03:44.749374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:58.713 [2024-07-25 12:03:44.749422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.713 [2024-07-25 12:03:44.749442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22d65f0 00:21:58.713 [2024-07-25 12:03:44.749454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.713 [2024-07-25 12:03:44.750967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.713 [2024-07-25 12:03:44.750995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:58.713 BaseBdev1 00:21:58.713 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:58.713 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:58.971 BaseBdev2_malloc 00:21:58.971 12:03:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
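Underneath the QAT allocation noise, the trace is building the array's members: each base bdev is a 32 MB malloc disk with 512-byte blocks (bdev_malloc_create 32 512, i.e. 65536 blocks, which matches the data_offset 2048 plus data_size 63488 reported later) wrapped in a passthru bdev that carries the BaseBdev1/BaseBdev2 name the raid module will claim. The following lines build the spare the same way but insert a delay bdev between its malloc and passthru layers. A sketch of the two-member stacking as the trace performs it:

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  for i in 1 2; do
      # 32 MB backing store, 512-byte blocks: 65536 blocks per member.
      rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
      # Passthru layer claims the malloc bdev and exposes it under the name the raid will use.
      rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done

Once both members exist, the array itself is assembled with bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1, as the trace shows a few lines further on.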
00:21:59.228 [2024-07-25 12:03:45.210947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:59.228 [2024-07-25 12:03:45.210986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.228 [2024-07-25 12:03:45.211003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x247a130 00:21:59.228 [2024-07-25 12:03:45.211014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.228 [2024-07-25 12:03:45.212325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.228 [2024-07-25 12:03:45.212351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:59.228 BaseBdev2 00:21:59.228 12:03:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:59.486 spare_malloc 00:21:59.486 12:03:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:59.744 spare_delay 00:21:59.744 12:03:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:00.003 [2024-07-25 12:03:45.872775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:00.003 [2024-07-25 12:03:45.872813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.003 [2024-07-25 12:03:45.872830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2479770 00:22:00.003 [2024-07-25 12:03:45.872841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.003 [2024-07-25 12:03:45.874166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.003 [2024-07-25 12:03:45.874192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:00.003 spare 00:22:00.003 12:03:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:00.003 [2024-07-25 12:03:46.049268] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.003 [2024-07-25 12:03:46.050366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.003 [2024-07-25 12:03:46.050511] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22ce270 00:22:00.003 [2024-07-25 12:03:46.050523] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:00.003 [2024-07-25 12:03:46.050684] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x247a3c0 00:22:00.003 [2024-07-25 12:03:46.050808] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22ce270 00:22:00.003 [2024-07-25 12:03:46.050817] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22ce270 00:22:00.003 [2024-07-25 12:03:46.050901] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.003 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.261 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.261 "name": "raid_bdev1", 00:22:00.261 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:00.261 "strip_size_kb": 0, 00:22:00.261 "state": "online", 00:22:00.261 "raid_level": "raid1", 00:22:00.261 "superblock": true, 00:22:00.261 "num_base_bdevs": 2, 00:22:00.261 "num_base_bdevs_discovered": 2, 00:22:00.261 "num_base_bdevs_operational": 2, 00:22:00.261 "base_bdevs_list": [ 00:22:00.261 { 00:22:00.261 "name": "BaseBdev1", 00:22:00.261 "uuid": "0d917f16-4881-5fdc-a7c4-68258263c331", 00:22:00.261 "is_configured": true, 00:22:00.261 "data_offset": 2048, 00:22:00.261 "data_size": 63488 00:22:00.261 }, 00:22:00.261 { 00:22:00.261 "name": "BaseBdev2", 00:22:00.261 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:00.261 "is_configured": true, 00:22:00.261 "data_offset": 2048, 00:22:00.261 "data_size": 63488 00:22:00.261 } 00:22:00.261 ] 00:22:00.261 }' 00:22:00.261 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.261 12:03:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.828 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:00.828 12:03:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:01.086 [2024-07-25 12:03:47.080184] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.086 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:22:01.086 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.086 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.344 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:01.603 [2024-07-25 12:03:47.537215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x247a3c0 00:22:01.603 /dev/nbd0 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.603 1+0 records in 00:22:01.603 1+0 records out 00:22:01.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241383 s, 17.0 MB/s 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:01.603 12:03:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:05.791 63488+0 records in 00:22:05.791 63488+0 records out 00:22:05.791 32505856 bytes (33 MB, 31 MiB) copied, 3.9941 s, 8.1 MB/s 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:05.791 [2024-07-25 12:03:51.848817] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.791 12:03:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:06.048 [2024-07-25 12:03:52.073449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.048 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.049 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.307 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.307 "name": "raid_bdev1", 00:22:06.307 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:06.307 "strip_size_kb": 0, 00:22:06.307 "state": "online", 00:22:06.307 "raid_level": "raid1", 00:22:06.307 "superblock": true, 00:22:06.307 "num_base_bdevs": 2, 00:22:06.307 "num_base_bdevs_discovered": 1, 00:22:06.307 "num_base_bdevs_operational": 1, 00:22:06.307 "base_bdevs_list": [ 00:22:06.307 { 00:22:06.307 "name": null, 00:22:06.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.307 "is_configured": false, 00:22:06.307 "data_offset": 2048, 00:22:06.307 "data_size": 63488 00:22:06.307 }, 00:22:06.307 { 00:22:06.307 "name": "BaseBdev2", 00:22:06.307 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:06.307 "is_configured": true, 00:22:06.307 "data_offset": 2048, 00:22:06.307 "data_size": 63488 00:22:06.307 } 00:22:06.307 ] 00:22:06.307 }' 00:22:06.307 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.307 12:03:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.873 12:03:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:07.131 [2024-07-25 12:03:53.092152] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.131 [2024-07-25 12:03:53.096861] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x247a3c0 00:22:07.131 [2024-07-25 12:03:53.098905] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:07.131 12:03:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:08.092 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.093 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.351 "name": 
"raid_bdev1", 00:22:08.351 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:08.351 "strip_size_kb": 0, 00:22:08.351 "state": "online", 00:22:08.351 "raid_level": "raid1", 00:22:08.351 "superblock": true, 00:22:08.351 "num_base_bdevs": 2, 00:22:08.351 "num_base_bdevs_discovered": 2, 00:22:08.351 "num_base_bdevs_operational": 2, 00:22:08.351 "process": { 00:22:08.351 "type": "rebuild", 00:22:08.351 "target": "spare", 00:22:08.351 "progress": { 00:22:08.351 "blocks": 24576, 00:22:08.351 "percent": 38 00:22:08.351 } 00:22:08.351 }, 00:22:08.351 "base_bdevs_list": [ 00:22:08.351 { 00:22:08.351 "name": "spare", 00:22:08.351 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:08.351 "is_configured": true, 00:22:08.351 "data_offset": 2048, 00:22:08.351 "data_size": 63488 00:22:08.351 }, 00:22:08.351 { 00:22:08.351 "name": "BaseBdev2", 00:22:08.351 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:08.351 "is_configured": true, 00:22:08.351 "data_offset": 2048, 00:22:08.351 "data_size": 63488 00:22:08.351 } 00:22:08.351 ] 00:22:08.351 }' 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.351 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:08.610 [2024-07-25 12:03:54.649588] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.610 [2024-07-25 12:03:54.710590] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.610 [2024-07-25 12:03:54.710636] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.610 [2024-07-25 12:03:54.710650] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.610 [2024-07-25 12:03:54.710658] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.869 "name": "raid_bdev1", 00:22:08.869 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:08.869 "strip_size_kb": 0, 00:22:08.869 "state": "online", 00:22:08.869 "raid_level": "raid1", 00:22:08.869 "superblock": true, 00:22:08.869 "num_base_bdevs": 2, 00:22:08.869 "num_base_bdevs_discovered": 1, 00:22:08.869 "num_base_bdevs_operational": 1, 00:22:08.869 "base_bdevs_list": [ 00:22:08.869 { 00:22:08.869 "name": null, 00:22:08.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.869 "is_configured": false, 00:22:08.869 "data_offset": 2048, 00:22:08.869 "data_size": 63488 00:22:08.869 }, 00:22:08.869 { 00:22:08.869 "name": "BaseBdev2", 00:22:08.869 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:08.869 "is_configured": true, 00:22:08.869 "data_offset": 2048, 00:22:08.869 "data_size": 63488 00:22:08.869 } 00:22:08.869 ] 00:22:08.869 }' 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.869 12:03:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.485 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.744 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.744 "name": "raid_bdev1", 00:22:09.744 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:09.744 "strip_size_kb": 0, 00:22:09.744 "state": "online", 00:22:09.744 "raid_level": "raid1", 00:22:09.744 "superblock": true, 00:22:09.744 "num_base_bdevs": 2, 00:22:09.744 "num_base_bdevs_discovered": 1, 00:22:09.744 "num_base_bdevs_operational": 1, 00:22:09.744 "base_bdevs_list": [ 00:22:09.744 { 00:22:09.744 "name": null, 00:22:09.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.744 "is_configured": false, 00:22:09.744 "data_offset": 2048, 00:22:09.744 "data_size": 63488 00:22:09.744 }, 00:22:09.744 { 00:22:09.744 "name": "BaseBdev2", 00:22:09.744 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:09.744 "is_configured": true, 00:22:09.744 "data_offset": 2048, 00:22:09.744 "data_size": 63488 00:22:09.744 } 00:22:09.744 ] 00:22:09.744 }' 00:22:09.744 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:09.744 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:09.745 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:22:09.745 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.745 12:03:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.003 [2024-07-25 12:03:56.050379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.003 [2024-07-25 12:03:56.055087] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246e8f0 00:22:10.003 [2024-07-25 12:03:56.056457] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.003 12:03:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.379 "name": "raid_bdev1", 00:22:11.379 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:11.379 "strip_size_kb": 0, 00:22:11.379 "state": "online", 00:22:11.379 "raid_level": "raid1", 00:22:11.379 "superblock": true, 00:22:11.379 "num_base_bdevs": 2, 00:22:11.379 "num_base_bdevs_discovered": 2, 00:22:11.379 "num_base_bdevs_operational": 2, 00:22:11.379 "process": { 00:22:11.379 "type": "rebuild", 00:22:11.379 "target": "spare", 00:22:11.379 "progress": { 00:22:11.379 "blocks": 24576, 00:22:11.379 "percent": 38 00:22:11.379 } 00:22:11.379 }, 00:22:11.379 "base_bdevs_list": [ 00:22:11.379 { 00:22:11.379 "name": "spare", 00:22:11.379 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:11.379 "is_configured": true, 00:22:11.379 "data_offset": 2048, 00:22:11.379 "data_size": 63488 00:22:11.379 }, 00:22:11.379 { 00:22:11.379 "name": "BaseBdev2", 00:22:11.379 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:11.379 "is_configured": true, 00:22:11.379 "data_offset": 2048, 00:22:11.379 "data_size": 63488 00:22:11.379 } 00:22:11.379 ] 00:22:11.379 }' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 
00:22:11.379 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=742 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.379 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.638 "name": "raid_bdev1", 00:22:11.638 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:11.638 "strip_size_kb": 0, 00:22:11.638 "state": "online", 00:22:11.638 "raid_level": "raid1", 00:22:11.638 "superblock": true, 00:22:11.638 "num_base_bdevs": 2, 00:22:11.638 "num_base_bdevs_discovered": 2, 00:22:11.638 "num_base_bdevs_operational": 2, 00:22:11.638 "process": { 00:22:11.638 "type": "rebuild", 00:22:11.638 "target": "spare", 00:22:11.638 "progress": { 00:22:11.638 "blocks": 30720, 00:22:11.638 "percent": 48 00:22:11.638 } 00:22:11.638 }, 00:22:11.638 "base_bdevs_list": [ 00:22:11.638 { 00:22:11.638 "name": "spare", 00:22:11.638 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:11.638 "is_configured": true, 00:22:11.638 "data_offset": 2048, 00:22:11.638 "data_size": 63488 00:22:11.638 }, 00:22:11.638 { 00:22:11.638 "name": "BaseBdev2", 00:22:11.638 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:11.638 "is_configured": true, 00:22:11.638 "data_offset": 2048, 00:22:11.638 "data_size": 63488 00:22:11.638 } 00:22:11.638 ] 00:22:11.638 }' 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.638 12:03:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.015 12:03:58 
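Two things happen in the span above. The "[: =: unary operator expected" message is the shell complaining about the unquoted test at bdev_raid.sh line 665: its left-hand variable expanded to nothing, so '[' saw only "= false ]"; the broken test returns a non-zero status and the script falls through to the next statement, as the trace shows. After that the helper enters its rebuild watch: with a deadline on the shell's SECONDS counter (timeout=742 here), it re-reads bdev_raid_get_bdevs once per second and the dumped JSON shows .process.progress.blocks climbing (24576 and 30720 so far, out of 63488 data blocks). A compact sketch of that watch loop; the progress echo is illustrative, since the helper itself only checks the process type and target:

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  deadline=$((SECONDS + 60))    # assumption: the real helper computes its own deadline
  while (( SECONDS < deadline )); do
      info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # While the rebuild runs, .process reports type "rebuild", target "spare" and a block counter.
      [ "$(echo "$info" | jq -r '.process.type // "none"')" = rebuild ] || break
      echo "rebuilt $(echo "$info" | jq -r '.process.progress.blocks') of 63488 blocks"
      sleep 1
  done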
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.015 "name": "raid_bdev1", 00:22:13.015 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:13.015 "strip_size_kb": 0, 00:22:13.015 "state": "online", 00:22:13.015 "raid_level": "raid1", 00:22:13.015 "superblock": true, 00:22:13.015 "num_base_bdevs": 2, 00:22:13.015 "num_base_bdevs_discovered": 2, 00:22:13.015 "num_base_bdevs_operational": 2, 00:22:13.015 "process": { 00:22:13.015 "type": "rebuild", 00:22:13.015 "target": "spare", 00:22:13.015 "progress": { 00:22:13.015 "blocks": 57344, 00:22:13.015 "percent": 90 00:22:13.015 } 00:22:13.015 }, 00:22:13.015 "base_bdevs_list": [ 00:22:13.015 { 00:22:13.015 "name": "spare", 00:22:13.015 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:13.015 "is_configured": true, 00:22:13.015 "data_offset": 2048, 00:22:13.015 "data_size": 63488 00:22:13.015 }, 00:22:13.015 { 00:22:13.015 "name": "BaseBdev2", 00:22:13.015 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:13.015 "is_configured": true, 00:22:13.015 "data_offset": 2048, 00:22:13.015 "data_size": 63488 00:22:13.015 } 00:22:13.015 ] 00:22:13.015 }' 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.015 12:03:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.015 12:03:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.015 12:03:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:13.274 [2024-07-25 12:03:59.178915] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:13.274 [2024-07-25 12:03:59.178969] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:13.274 [2024-07-25 12:03:59.179053] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.210 12:04:00 
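At this point the rebuild is complete: bdev_raid prints the "Finished rebuild on raid bdev raid_bdev1" notice above, and the helper goes back through the same none/none process check and online-state verification that closed the first run. The rebuild that just finished was kicked off at bdev_raid.sh@661 by re-attaching the spare to the degraded array; stripped of the surrounding checks, the degrade-and-rebuild cycle the test exercises is just two RPCs:

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Degrade the array: raid_bdev1 stays online with one operational member.
  rpc bdev_raid_remove_base_bdev BaseBdev1
  # Attach the delayed spare; bdev_raid reports "Started rebuild on raid bdev raid_bdev1".
  rpc bdev_raid_add_base_bdev raid_bdev1 spare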
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.210 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.210 "name": "raid_bdev1", 00:22:14.210 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:14.210 "strip_size_kb": 0, 00:22:14.211 "state": "online", 00:22:14.211 "raid_level": "raid1", 00:22:14.211 "superblock": true, 00:22:14.211 "num_base_bdevs": 2, 00:22:14.211 "num_base_bdevs_discovered": 2, 00:22:14.211 "num_base_bdevs_operational": 2, 00:22:14.211 "base_bdevs_list": [ 00:22:14.211 { 00:22:14.211 "name": "spare", 00:22:14.211 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:14.211 "is_configured": true, 00:22:14.211 "data_offset": 2048, 00:22:14.211 "data_size": 63488 00:22:14.211 }, 00:22:14.211 { 00:22:14.211 "name": "BaseBdev2", 00:22:14.211 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:14.211 "is_configured": true, 00:22:14.211 "data_offset": 2048, 00:22:14.211 "data_size": 63488 00:22:14.211 } 00:22:14.211 ] 00:22:14.211 }' 00:22:14.211 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.211 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.211 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.469 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.469 "name": "raid_bdev1", 00:22:14.469 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:14.469 "strip_size_kb": 0, 00:22:14.469 "state": "online", 00:22:14.469 "raid_level": "raid1", 00:22:14.469 "superblock": true, 00:22:14.469 "num_base_bdevs": 2, 00:22:14.469 "num_base_bdevs_discovered": 2, 00:22:14.469 "num_base_bdevs_operational": 2, 00:22:14.469 "base_bdevs_list": [ 00:22:14.469 { 00:22:14.469 "name": "spare", 00:22:14.469 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:14.470 "is_configured": true, 00:22:14.470 "data_offset": 2048, 00:22:14.470 "data_size": 63488 00:22:14.470 }, 00:22:14.470 { 00:22:14.470 "name": "BaseBdev2", 00:22:14.470 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:14.470 "is_configured": true, 
00:22:14.470 "data_offset": 2048, 00:22:14.470 "data_size": 63488 00:22:14.470 } 00:22:14.470 ] 00:22:14.470 }' 00:22:14.470 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.728 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.987 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.987 "name": "raid_bdev1", 00:22:14.987 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:14.987 "strip_size_kb": 0, 00:22:14.987 "state": "online", 00:22:14.987 "raid_level": "raid1", 00:22:14.987 "superblock": true, 00:22:14.987 "num_base_bdevs": 2, 00:22:14.987 "num_base_bdevs_discovered": 2, 00:22:14.987 "num_base_bdevs_operational": 2, 00:22:14.987 "base_bdevs_list": [ 00:22:14.987 { 00:22:14.987 "name": "spare", 00:22:14.987 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:14.987 "is_configured": true, 00:22:14.987 "data_offset": 2048, 00:22:14.987 "data_size": 63488 00:22:14.987 }, 00:22:14.987 { 00:22:14.987 "name": "BaseBdev2", 00:22:14.987 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:14.987 "is_configured": true, 00:22:14.987 "data_offset": 2048, 00:22:14.987 "data_size": 63488 00:22:14.987 } 00:22:14.987 ] 00:22:14.987 }' 00:22:14.987 12:04:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.987 12:04:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.553 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.811 [2024-07-25 12:04:01.673745] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.811 [2024-07-25 12:04:01.673770] bdev_raid.c:1870:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:22:15.811 [2024-07-25 12:04:01.673825] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.811 [2024-07-25 12:04:01.673880] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.811 [2024-07-25 12:04:01.673892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22ce270 name raid_bdev1, state offline 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.811 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:16.070 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.070 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:16.070 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.070 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:16.070 12:04:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:16.070 /dev/nbd0 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:16.070 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:22:16.328 1+0 records in 00:22:16.328 1+0 records out 00:22:16.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239863 s, 17.1 MB/s 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:16.328 /dev/nbd1 00:22:16.328 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.586 1+0 records in 00:22:16.586 1+0 records out 00:22:16.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344962 s, 11.9 MB/s 00:22:16.586 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:16.587 12:04:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.587 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.845 12:04:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:17.428 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:17.699 [2024-07-25 12:04:03.774796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:17.699 [2024-07-25 12:04:03.774840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.699 [2024-07-25 
12:04:03.774859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ce4f0 00:22:17.699 [2024-07-25 12:04:03.774870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.699 [2024-07-25 12:04:03.776413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.699 [2024-07-25 12:04:03.776441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:17.699 [2024-07-25 12:04:03.776517] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:17.699 [2024-07-25 12:04:03.776542] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.699 [2024-07-25 12:04:03.776641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.699 spare 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.699 12:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.958 [2024-07-25 12:04:03.876955] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22cf8b0 00:22:17.958 [2024-07-25 12:04:03.876972] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:17.958 [2024-07-25 12:04:03.877153] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246e8f0 00:22:17.958 [2024-07-25 12:04:03.877290] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22cf8b0 00:22:17.958 [2024-07-25 12:04:03.877300] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22cf8b0 00:22:17.958 [2024-07-25 12:04:03.877401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.958 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.958 "name": "raid_bdev1", 00:22:17.958 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:17.958 "strip_size_kb": 0, 00:22:17.958 "state": "online", 00:22:17.958 "raid_level": "raid1", 00:22:17.958 "superblock": true, 00:22:17.958 "num_base_bdevs": 2, 00:22:17.958 "num_base_bdevs_discovered": 2, 00:22:17.958 "num_base_bdevs_operational": 2, 00:22:17.958 "base_bdevs_list": [ 00:22:17.958 { 
00:22:17.958 "name": "spare", 00:22:17.958 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:17.958 "is_configured": true, 00:22:17.958 "data_offset": 2048, 00:22:17.958 "data_size": 63488 00:22:17.958 }, 00:22:17.958 { 00:22:17.958 "name": "BaseBdev2", 00:22:17.958 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:17.958 "is_configured": true, 00:22:17.958 "data_offset": 2048, 00:22:17.958 "data_size": 63488 00:22:17.958 } 00:22:17.958 ] 00:22:17.958 }' 00:22:17.958 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.958 12:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.527 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.785 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.785 "name": "raid_bdev1", 00:22:18.785 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:18.785 "strip_size_kb": 0, 00:22:18.785 "state": "online", 00:22:18.785 "raid_level": "raid1", 00:22:18.785 "superblock": true, 00:22:18.785 "num_base_bdevs": 2, 00:22:18.785 "num_base_bdevs_discovered": 2, 00:22:18.785 "num_base_bdevs_operational": 2, 00:22:18.785 "base_bdevs_list": [ 00:22:18.785 { 00:22:18.785 "name": "spare", 00:22:18.785 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:18.785 "is_configured": true, 00:22:18.785 "data_offset": 2048, 00:22:18.785 "data_size": 63488 00:22:18.785 }, 00:22:18.785 { 00:22:18.785 "name": "BaseBdev2", 00:22:18.785 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:18.785 "is_configured": true, 00:22:18.785 "data_offset": 2048, 00:22:18.785 "data_size": 63488 00:22:18.785 } 00:22:18.785 ] 00:22:18.785 }' 00:22:18.785 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:18.785 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:18.785 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:19.044 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:19.044 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.044 12:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:19.044 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.044 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:19.303 [2024-07-25 12:04:05.351046] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.303 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.562 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.562 "name": "raid_bdev1", 00:22:19.562 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:19.562 "strip_size_kb": 0, 00:22:19.562 "state": "online", 00:22:19.562 "raid_level": "raid1", 00:22:19.562 "superblock": true, 00:22:19.562 "num_base_bdevs": 2, 00:22:19.562 "num_base_bdevs_discovered": 1, 00:22:19.562 "num_base_bdevs_operational": 1, 00:22:19.562 "base_bdevs_list": [ 00:22:19.562 { 00:22:19.562 "name": null, 00:22:19.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.562 "is_configured": false, 00:22:19.562 "data_offset": 2048, 00:22:19.562 "data_size": 63488 00:22:19.562 }, 00:22:19.562 { 00:22:19.562 "name": "BaseBdev2", 00:22:19.562 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:19.562 "is_configured": true, 00:22:19.562 "data_offset": 2048, 00:22:19.562 "data_size": 63488 00:22:19.562 } 00:22:19.562 ] 00:22:19.562 }' 00:22:19.562 12:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.562 12:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.130 12:04:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:20.389 [2024-07-25 12:04:06.361737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:20.389 [2024-07-25 12:04:06.361878] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:20.389 [2024-07-25 12:04:06.361893] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:20.389 [2024-07-25 12:04:06.361920] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:20.389 [2024-07-25 12:04:06.366575] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246e8f0 00:22:20.389 [2024-07-25 12:04:06.368705] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:20.389 12:04:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.325 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:21.584 "name": "raid_bdev1", 00:22:21.584 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:21.584 "strip_size_kb": 0, 00:22:21.584 "state": "online", 00:22:21.584 "raid_level": "raid1", 00:22:21.584 "superblock": true, 00:22:21.584 "num_base_bdevs": 2, 00:22:21.584 "num_base_bdevs_discovered": 2, 00:22:21.584 "num_base_bdevs_operational": 2, 00:22:21.584 "process": { 00:22:21.584 "type": "rebuild", 00:22:21.584 "target": "spare", 00:22:21.584 "progress": { 00:22:21.584 "blocks": 24576, 00:22:21.584 "percent": 38 00:22:21.584 } 00:22:21.584 }, 00:22:21.584 "base_bdevs_list": [ 00:22:21.584 { 00:22:21.584 "name": "spare", 00:22:21.584 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:21.584 "is_configured": true, 00:22:21.584 "data_offset": 2048, 00:22:21.584 "data_size": 63488 00:22:21.584 }, 00:22:21.584 { 00:22:21.584 "name": "BaseBdev2", 00:22:21.584 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:21.584 "is_configured": true, 00:22:21.584 "data_offset": 2048, 00:22:21.584 "data_size": 63488 00:22:21.584 } 00:22:21.584 ] 00:22:21.584 }' 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.584 12:04:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:21.842 [2024-07-25 12:04:07.910900] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.102 [2024-07-25 12:04:07.980321] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:22.102 [2024-07-25 12:04:07.980366] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:22:22.102 [2024-07-25 12:04:07.980380] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.102 [2024-07-25 12:04:07.980387] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.102 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.360 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.360 "name": "raid_bdev1", 00:22:22.360 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:22.360 "strip_size_kb": 0, 00:22:22.360 "state": "online", 00:22:22.360 "raid_level": "raid1", 00:22:22.360 "superblock": true, 00:22:22.360 "num_base_bdevs": 2, 00:22:22.360 "num_base_bdevs_discovered": 1, 00:22:22.360 "num_base_bdevs_operational": 1, 00:22:22.360 "base_bdevs_list": [ 00:22:22.360 { 00:22:22.360 "name": null, 00:22:22.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.360 "is_configured": false, 00:22:22.360 "data_offset": 2048, 00:22:22.360 "data_size": 63488 00:22:22.360 }, 00:22:22.360 { 00:22:22.360 "name": "BaseBdev2", 00:22:22.360 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:22.360 "is_configured": true, 00:22:22.360 "data_offset": 2048, 00:22:22.360 "data_size": 63488 00:22:22.360 } 00:22:22.360 ] 00:22:22.360 }' 00:22:22.360 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.360 12:04:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.928 12:04:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:22.928 [2024-07-25 12:04:08.990427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.928 [2024-07-25 12:04:08.990474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.928 [2024-07-25 12:04:08.990494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22cfcf0 00:22:22.928 [2024-07-25 12:04:08.990506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:22:22.928 [2024-07-25 12:04:08.990859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.928 [2024-07-25 12:04:08.990875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.928 [2024-07-25 12:04:08.990952] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:22.928 [2024-07-25 12:04:08.990964] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:22.928 [2024-07-25 12:04:08.990974] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:22.928 [2024-07-25 12:04:08.990992] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.928 [2024-07-25 12:04:08.995612] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22ceda0 00:22:22.928 spare 00:22:22.928 [2024-07-25 12:04:08.996972] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:22.928 12:04:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:23.899 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.899 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:23.899 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:23.899 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:23.899 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:24.158 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.158 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.158 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:24.158 "name": "raid_bdev1", 00:22:24.158 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:24.158 "strip_size_kb": 0, 00:22:24.158 "state": "online", 00:22:24.158 "raid_level": "raid1", 00:22:24.158 "superblock": true, 00:22:24.158 "num_base_bdevs": 2, 00:22:24.158 "num_base_bdevs_discovered": 2, 00:22:24.158 "num_base_bdevs_operational": 2, 00:22:24.158 "process": { 00:22:24.158 "type": "rebuild", 00:22:24.158 "target": "spare", 00:22:24.158 "progress": { 00:22:24.158 "blocks": 24576, 00:22:24.158 "percent": 38 00:22:24.158 } 00:22:24.158 }, 00:22:24.158 "base_bdevs_list": [ 00:22:24.158 { 00:22:24.158 "name": "spare", 00:22:24.158 "uuid": "76c23dd3-4467-5fe8-9c85-f0ae2668597c", 00:22:24.158 "is_configured": true, 00:22:24.158 "data_offset": 2048, 00:22:24.158 "data_size": 63488 00:22:24.158 }, 00:22:24.158 { 00:22:24.158 "name": "BaseBdev2", 00:22:24.158 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:24.158 "is_configured": true, 00:22:24.158 "data_offset": 2048, 00:22:24.158 "data_size": 63488 00:22:24.158 } 00:22:24.158 ] 00:22:24.158 }' 00:22:24.158 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:24.417 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.417 12:04:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:24.417 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.417 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:24.677 [2024-07-25 12:04:10.544115] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.677 [2024-07-25 12:04:10.608675] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.677 [2024-07-25 12:04:10.608720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.677 [2024-07-25 12:04:10.608734] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.677 [2024-07-25 12:04:10.608742] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.677 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.936 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.936 "name": "raid_bdev1", 00:22:24.936 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:24.936 "strip_size_kb": 0, 00:22:24.936 "state": "online", 00:22:24.936 "raid_level": "raid1", 00:22:24.936 "superblock": true, 00:22:24.936 "num_base_bdevs": 2, 00:22:24.936 "num_base_bdevs_discovered": 1, 00:22:24.936 "num_base_bdevs_operational": 1, 00:22:24.936 "base_bdevs_list": [ 00:22:24.936 { 00:22:24.936 "name": null, 00:22:24.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.936 "is_configured": false, 00:22:24.936 "data_offset": 2048, 00:22:24.936 "data_size": 63488 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "name": "BaseBdev2", 00:22:24.936 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:24.936 "is_configured": true, 00:22:24.936 "data_offset": 2048, 00:22:24.936 "data_size": 63488 00:22:24.936 } 00:22:24.936 ] 00:22:24.936 }' 00:22:24.936 12:04:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.936 12:04:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.504 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:25.763 "name": "raid_bdev1", 00:22:25.763 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:25.763 "strip_size_kb": 0, 00:22:25.763 "state": "online", 00:22:25.763 "raid_level": "raid1", 00:22:25.763 "superblock": true, 00:22:25.763 "num_base_bdevs": 2, 00:22:25.763 "num_base_bdevs_discovered": 1, 00:22:25.763 "num_base_bdevs_operational": 1, 00:22:25.763 "base_bdevs_list": [ 00:22:25.763 { 00:22:25.763 "name": null, 00:22:25.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.763 "is_configured": false, 00:22:25.763 "data_offset": 2048, 00:22:25.763 "data_size": 63488 00:22:25.763 }, 00:22:25.763 { 00:22:25.763 "name": "BaseBdev2", 00:22:25.763 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:25.763 "is_configured": true, 00:22:25.763 "data_offset": 2048, 00:22:25.763 "data_size": 63488 00:22:25.763 } 00:22:25.763 ] 00:22:25.763 }' 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:25.763 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:26.023 12:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:26.282 [2024-07-25 12:04:12.185072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:26.282 [2024-07-25 12:04:12.185115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.282 [2024-07-25 12:04:12.185132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x236d6e0 00:22:26.282 [2024-07-25 12:04:12.185149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.282 [2024-07-25 12:04:12.185471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.282 [2024-07-25 12:04:12.185487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:26.282 [2024-07-25 12:04:12.185547] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:26.282 [2024-07-25 12:04:12.185558] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:26.282 [2024-07-25 12:04:12.185568] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:26.282 BaseBdev1 00:22:26.282 12:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.219 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.478 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.478 "name": "raid_bdev1", 00:22:27.478 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:27.478 "strip_size_kb": 0, 00:22:27.478 "state": "online", 00:22:27.478 "raid_level": "raid1", 00:22:27.478 "superblock": true, 00:22:27.478 "num_base_bdevs": 2, 00:22:27.478 "num_base_bdevs_discovered": 1, 00:22:27.478 "num_base_bdevs_operational": 1, 00:22:27.478 "base_bdevs_list": [ 00:22:27.478 { 00:22:27.478 "name": null, 00:22:27.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.478 "is_configured": false, 00:22:27.478 "data_offset": 2048, 00:22:27.478 "data_size": 63488 00:22:27.478 }, 00:22:27.478 { 00:22:27.478 "name": "BaseBdev2", 00:22:27.478 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:27.478 "is_configured": true, 00:22:27.478 "data_offset": 2048, 00:22:27.478 "data_size": 63488 00:22:27.478 } 00:22:27.478 ] 00:22:27.478 }' 00:22:27.478 12:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.478 12:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # 
local target=none 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.046 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:28.304 "name": "raid_bdev1", 00:22:28.304 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:28.304 "strip_size_kb": 0, 00:22:28.304 "state": "online", 00:22:28.304 "raid_level": "raid1", 00:22:28.304 "superblock": true, 00:22:28.304 "num_base_bdevs": 2, 00:22:28.304 "num_base_bdevs_discovered": 1, 00:22:28.304 "num_base_bdevs_operational": 1, 00:22:28.304 "base_bdevs_list": [ 00:22:28.304 { 00:22:28.304 "name": null, 00:22:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.304 "is_configured": false, 00:22:28.304 "data_offset": 2048, 00:22:28.304 "data_size": 63488 00:22:28.304 }, 00:22:28.304 { 00:22:28.304 "name": "BaseBdev2", 00:22:28.304 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:28.304 "is_configured": true, 00:22:28.304 "data_offset": 2048, 00:22:28.304 "data_size": 63488 00:22:28.304 } 00:22:28.304 ] 00:22:28.304 }' 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:28.304 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.562 [2024-07-25 12:04:14.563340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.562 [2024-07-25 12:04:14.563462] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:28.562 [2024-07-25 12:04:14.563477] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:28.562 request: 00:22:28.562 { 00:22:28.562 "base_bdev": "BaseBdev1", 00:22:28.562 "raid_bdev": "raid_bdev1", 00:22:28.562 "method": "bdev_raid_add_base_bdev", 00:22:28.562 "req_id": 1 00:22:28.562 } 00:22:28.562 Got JSON-RPC error response 00:22:28.562 response: 00:22:28.562 { 00:22:28.563 "code": -22, 00:22:28.563 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:28.563 } 00:22:28.563 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:22:28.563 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.563 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.563 12:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.563 12:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:29.498 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.499 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.758 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.758 "name": "raid_bdev1", 00:22:29.758 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:29.758 "strip_size_kb": 0, 00:22:29.758 "state": "online", 00:22:29.758 "raid_level": "raid1", 00:22:29.758 "superblock": true, 00:22:29.758 "num_base_bdevs": 2, 00:22:29.758 "num_base_bdevs_discovered": 1, 00:22:29.758 "num_base_bdevs_operational": 1, 00:22:29.758 
"base_bdevs_list": [ 00:22:29.758 { 00:22:29.758 "name": null, 00:22:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.758 "is_configured": false, 00:22:29.758 "data_offset": 2048, 00:22:29.758 "data_size": 63488 00:22:29.758 }, 00:22:29.758 { 00:22:29.758 "name": "BaseBdev2", 00:22:29.758 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:29.758 "is_configured": true, 00:22:29.758 "data_offset": 2048, 00:22:29.758 "data_size": 63488 00:22:29.758 } 00:22:29.758 ] 00:22:29.758 }' 00:22:29.758 12:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.758 12:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.325 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.325 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:30.325 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:30.325 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:30.325 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:30.326 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.326 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.584 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:30.584 "name": "raid_bdev1", 00:22:30.584 "uuid": "d136c93c-5d32-487e-a18c-f5b7188a6ac1", 00:22:30.584 "strip_size_kb": 0, 00:22:30.584 "state": "online", 00:22:30.584 "raid_level": "raid1", 00:22:30.584 "superblock": true, 00:22:30.584 "num_base_bdevs": 2, 00:22:30.584 "num_base_bdevs_discovered": 1, 00:22:30.584 "num_base_bdevs_operational": 1, 00:22:30.584 "base_bdevs_list": [ 00:22:30.584 { 00:22:30.584 "name": null, 00:22:30.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.584 "is_configured": false, 00:22:30.584 "data_offset": 2048, 00:22:30.584 "data_size": 63488 00:22:30.584 }, 00:22:30.584 { 00:22:30.584 "name": "BaseBdev2", 00:22:30.584 "uuid": "4326a613-b967-581a-99ad-295e4b8c5626", 00:22:30.584 "is_configured": true, 00:22:30.584 "data_offset": 2048, 00:22:30.584 "data_size": 63488 00:22:30.584 } 00:22:30.584 ] 00:22:30.584 }' 00:22:30.584 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:30.584 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:30.584 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 29747 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 29747 ']' 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 29747 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.843 12:04:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 29747 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 29747' 00:22:30.843 killing process with pid 29747 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 29747 00:22:30.843 Received shutdown signal, test time was about 60.000000 seconds 00:22:30.843 00:22:30.843 Latency(us) 00:22:30.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.843 =================================================================================================================== 00:22:30.843 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.843 [2024-07-25 12:04:16.781744] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:30.843 [2024-07-25 12:04:16.781828] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.843 [2024-07-25 12:04:16.781870] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.843 [2024-07-25 12:04:16.781881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22cf8b0 name raid_bdev1, state offline 00:22:30.843 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 29747 00:22:30.843 [2024-07-25 12:04:16.806533] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:31.102 12:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:22:31.102 00:22:31.102 real 0m33.651s 00:22:31.102 user 0m49.235s 00:22:31.102 sys 0m6.148s 00:22:31.102 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.102 12:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.102 ************************************ 00:22:31.102 END TEST raid_rebuild_test_sb 00:22:31.102 ************************************ 00:22:31.102 12:04:17 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:22:31.102 12:04:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:31.102 12:04:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.102 12:04:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:31.102 ************************************ 00:22:31.102 START TEST raid_rebuild_test_io 00:22:31.102 ************************************ 00:22:31.102 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:22:31.102 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:31.102 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:31.102 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:22:31.102 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 
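Both raid_rebuild_test_sb above and the raid_rebuild_test_io run that starts here validate array state the same way: verify_raid_bdev_state dumps every RAID bdev over JSON-RPC and filters out raid_bdev1 with jq (bdev_raid.sh@126). A stand-alone sketch of that check, using the same rpc.py and socket as the trace; the field list in the second jq call is illustrative and taken from the JSON dumps in this log:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Grab the raid_bdev1 entry exactly as verify_raid_bdev_state does.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# Spot-check the fields the tests assert on (state, level, discovered/operational counts).
echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational'
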
00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=35776 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 35776 /var/tmp/spdk-raid.sock 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 35776 ']' 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:31.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.103 12:04:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:31.103 [2024-07-25 12:04:17.151700] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
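The I/O half of this test is driven by bdevperf started idle (-z, wait for the start RPC) against the dedicated /var/tmp/spdk-raid.sock socket; the flags in the sketch below are copied from the command line captured above, and the workload is kicked off later with the bdevperf.py helper that also appears in this log. A condensed sketch of that flow:

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
sock=/var/tmp/spdk-raid.sock
# 60 s of 50/50 randrw, 3 MiB I/Os, queue depth 2, bdev_raid debug logging, target raid_bdev1.
"$spdk"/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
# ...build the malloc/passthru base bdevs and raid_bdev1 over "$sock", then start the workload:
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
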
00:22:31.103 [2024-07-25 12:04:17.151758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35776 ] 00:22:31.103 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:31.103 Zero copy mechanism will not be used. 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:31.362 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:31.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.362 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:31.362 [2024-07-25 12:04:17.286029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.362 [2024-07-25 12:04:17.367423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.362 [2024-07-25 12:04:17.426936] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.362 [2024-07-25 12:04:17.426978] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.298 12:04:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.298 12:04:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:22:32.298 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:32.298 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:32.298 BaseBdev1_malloc 00:22:32.298 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:32.557 [2024-07-25 12:04:18.496632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:32.557 [2024-07-25 12:04:18.496679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.557 [2024-07-25 12:04:18.496698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ed5f0 00:22:32.557 [2024-07-25 12:04:18.496710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.557 [2024-07-25 12:04:18.498135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.557 [2024-07-25 12:04:18.498169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:32.557 BaseBdev1 00:22:32.557 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:32.557 12:04:18 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:32.816 BaseBdev2_malloc 00:22:32.816 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:33.074 [2024-07-25 12:04:18.946020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:33.074 [2024-07-25 12:04:18.946056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.074 [2024-07-25 12:04:18.946072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2491130 00:22:33.074 [2024-07-25 12:04:18.946083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.074 [2024-07-25 12:04:18.947388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.074 [2024-07-25 12:04:18.947413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:33.074 BaseBdev2 00:22:33.074 12:04:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:33.074 spare_malloc 00:22:33.333 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:33.333 spare_delay 00:22:33.333 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:33.592 [2024-07-25 12:04:19.619959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:33.592 [2024-07-25 12:04:19.619996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.592 [2024-07-25 12:04:19.620013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2490770 00:22:33.592 [2024-07-25 12:04:19.620024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.592 [2024-07-25 12:04:19.621283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.592 [2024-07-25 12:04:19.621308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:33.592 spare 00:22:33.592 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:33.850 [2024-07-25 12:04:19.848623] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.850 [2024-07-25 12:04:19.849697] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:33.850 [2024-07-25 12:04:19.849761] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22e5270 00:22:33.850 [2024-07-25 12:04:19.849771] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:33.850 [2024-07-25 12:04:19.849937] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24913c0 00:22:33.850 [2024-07-25 
12:04:19.850065] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22e5270 00:22:33.850 [2024-07-25 12:04:19.850074] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22e5270 00:22:33.850 [2024-07-25 12:04:19.850179] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.850 12:04:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.109 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.109 "name": "raid_bdev1", 00:22:34.109 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:34.109 "strip_size_kb": 0, 00:22:34.109 "state": "online", 00:22:34.109 "raid_level": "raid1", 00:22:34.109 "superblock": false, 00:22:34.109 "num_base_bdevs": 2, 00:22:34.109 "num_base_bdevs_discovered": 2, 00:22:34.109 "num_base_bdevs_operational": 2, 00:22:34.109 "base_bdevs_list": [ 00:22:34.109 { 00:22:34.109 "name": "BaseBdev1", 00:22:34.109 "uuid": "f23a02ff-ae42-5831-90f5-e632908b8e03", 00:22:34.109 "is_configured": true, 00:22:34.109 "data_offset": 0, 00:22:34.109 "data_size": 65536 00:22:34.109 }, 00:22:34.109 { 00:22:34.109 "name": "BaseBdev2", 00:22:34.109 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:34.109 "is_configured": true, 00:22:34.109 "data_offset": 0, 00:22:34.109 "data_size": 65536 00:22:34.109 } 00:22:34.109 ] 00:22:34.109 }' 00:22:34.109 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.109 12:04:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:34.676 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:34.676 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:34.935 [2024-07-25 12:04:20.871522] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.935 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:22:34.935 12:04:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.935 12:04:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:35.193 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:22:35.193 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:22:35.193 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:35.193 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:35.193 [2024-07-25 12:04:21.222207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24858f0 00:22:35.193 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:35.193 Zero copy mechanism will not be used. 00:22:35.193 Running I/O for 60 seconds... 00:22:35.451 [2024-07-25 12:04:21.334689] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:35.451 [2024-07-25 12:04:21.334860] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x24858f0 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.451 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.710 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.710 "name": "raid_bdev1", 00:22:35.710 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:35.710 "strip_size_kb": 0, 00:22:35.710 "state": "online", 00:22:35.710 "raid_level": "raid1", 00:22:35.710 "superblock": false, 00:22:35.710 "num_base_bdevs": 2, 00:22:35.710 "num_base_bdevs_discovered": 1, 00:22:35.710 "num_base_bdevs_operational": 1, 00:22:35.710 "base_bdevs_list": [ 00:22:35.710 { 00:22:35.710 "name": null, 00:22:35.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.710 "is_configured": false, 00:22:35.710 "data_offset": 0, 00:22:35.710 "data_size": 65536 00:22:35.710 }, 
00:22:35.710 { 00:22:35.710 "name": "BaseBdev2", 00:22:35.710 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:35.710 "is_configured": true, 00:22:35.710 "data_offset": 0, 00:22:35.710 "data_size": 65536 00:22:35.710 } 00:22:35.710 ] 00:22:35.710 }' 00:22:35.710 12:04:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.710 12:04:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:36.277 12:04:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:36.536 [2024-07-25 12:04:22.413112] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.536 12:04:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:36.536 [2024-07-25 12:04:22.490054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2484980 00:22:36.536 [2024-07-25 12:04:22.492396] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:36.536 [2024-07-25 12:04:22.620429] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:36.816 [2024-07-25 12:04:22.838935] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:36.816 [2024-07-25 12:04:22.839167] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:37.398 [2024-07-25 12:04:23.307471] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:37.398 [2024-07-25 12:04:23.307660] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.398 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.656 [2024-07-25 12:04:23.611469] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:37.656 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:37.656 "name": "raid_bdev1", 00:22:37.656 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:37.656 "strip_size_kb": 0, 00:22:37.656 "state": "online", 00:22:37.656 "raid_level": "raid1", 00:22:37.656 "superblock": false, 00:22:37.656 "num_base_bdevs": 2, 00:22:37.656 "num_base_bdevs_discovered": 2, 00:22:37.656 "num_base_bdevs_operational": 2, 00:22:37.656 "process": { 00:22:37.656 "type": "rebuild", 00:22:37.656 "target": "spare", 00:22:37.656 
"progress": { 00:22:37.656 "blocks": 14336, 00:22:37.656 "percent": 21 00:22:37.656 } 00:22:37.656 }, 00:22:37.656 "base_bdevs_list": [ 00:22:37.656 { 00:22:37.656 "name": "spare", 00:22:37.656 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:37.656 "is_configured": true, 00:22:37.656 "data_offset": 0, 00:22:37.656 "data_size": 65536 00:22:37.656 }, 00:22:37.656 { 00:22:37.656 "name": "BaseBdev2", 00:22:37.656 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:37.656 "is_configured": true, 00:22:37.656 "data_offset": 0, 00:22:37.656 "data_size": 65536 00:22:37.656 } 00:22:37.656 ] 00:22:37.656 }' 00:22:37.656 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:37.656 [2024-07-25 12:04:23.729284] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:37.656 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.656 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:37.923 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.923 12:04:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:37.923 [2024-07-25 12:04:23.972169] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:37.923 [2024-07-25 12:04:24.000277] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:38.183 [2024-07-25 12:04:24.188517] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:38.183 [2024-07-25 12:04:24.197739] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.183 [2024-07-25 12:04:24.197764] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:38.183 [2024-07-25 12:04:24.197773] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:38.183 [2024-07-25 12:04:24.210675] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x24858f0 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.183 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.442 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.442 "name": "raid_bdev1", 00:22:38.442 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:38.442 "strip_size_kb": 0, 00:22:38.442 "state": "online", 00:22:38.442 "raid_level": "raid1", 00:22:38.442 "superblock": false, 00:22:38.442 "num_base_bdevs": 2, 00:22:38.442 "num_base_bdevs_discovered": 1, 00:22:38.442 "num_base_bdevs_operational": 1, 00:22:38.442 "base_bdevs_list": [ 00:22:38.442 { 00:22:38.442 "name": null, 00:22:38.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.442 "is_configured": false, 00:22:38.442 "data_offset": 0, 00:22:38.442 "data_size": 65536 00:22:38.442 }, 00:22:38.442 { 00:22:38.442 "name": "BaseBdev2", 00:22:38.442 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:38.442 "is_configured": true, 00:22:38.442 "data_offset": 0, 00:22:38.442 "data_size": 65536 00:22:38.442 } 00:22:38.442 ] 00:22:38.442 }' 00:22:38.442 12:04:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.442 12:04:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.009 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.267 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.267 "name": "raid_bdev1", 00:22:39.267 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:39.267 "strip_size_kb": 0, 00:22:39.267 "state": "online", 00:22:39.267 "raid_level": "raid1", 00:22:39.267 "superblock": false, 00:22:39.267 "num_base_bdevs": 2, 00:22:39.267 "num_base_bdevs_discovered": 1, 00:22:39.267 "num_base_bdevs_operational": 1, 00:22:39.267 "base_bdevs_list": [ 00:22:39.267 { 00:22:39.267 "name": null, 00:22:39.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.268 "is_configured": false, 00:22:39.268 "data_offset": 0, 00:22:39.268 "data_size": 65536 00:22:39.268 }, 00:22:39.268 { 00:22:39.268 "name": "BaseBdev2", 00:22:39.268 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:39.268 "is_configured": true, 00:22:39.268 "data_offset": 0, 00:22:39.268 "data_size": 65536 00:22:39.268 } 00:22:39.268 ] 00:22:39.268 }' 00:22:39.268 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:39.268 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:39.268 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:22:39.526 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:39.526 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:39.526 [2024-07-25 12:04:25.613036] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.785 12:04:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:39.785 [2024-07-25 12:04:25.659938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2484980 00:22:39.785 [2024-07-25 12:04:25.661324] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:39.785 [2024-07-25 12:04:25.778421] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:39.785 [2024-07-25 12:04:25.778683] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:40.353 [2024-07-25 12:04:26.401451] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:40.353 [2024-07-25 12:04:26.401650] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.612 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.871 [2024-07-25 12:04:26.752723] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:40.871 "name": "raid_bdev1", 00:22:40.871 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:40.871 "strip_size_kb": 0, 00:22:40.871 "state": "online", 00:22:40.871 "raid_level": "raid1", 00:22:40.871 "superblock": false, 00:22:40.871 "num_base_bdevs": 2, 00:22:40.871 "num_base_bdevs_discovered": 2, 00:22:40.871 "num_base_bdevs_operational": 2, 00:22:40.871 "process": { 00:22:40.871 "type": "rebuild", 00:22:40.871 "target": "spare", 00:22:40.871 "progress": { 00:22:40.871 "blocks": 14336, 00:22:40.871 "percent": 21 00:22:40.871 } 00:22:40.871 }, 00:22:40.871 "base_bdevs_list": [ 00:22:40.871 { 00:22:40.871 "name": "spare", 00:22:40.871 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:40.871 "is_configured": true, 00:22:40.871 "data_offset": 0, 00:22:40.871 "data_size": 65536 00:22:40.871 }, 00:22:40.871 { 00:22:40.871 "name": "BaseBdev2", 00:22:40.871 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 
00:22:40.871 "is_configured": true, 00:22:40.871 "data_offset": 0, 00:22:40.871 "data_size": 65536 00:22:40.871 } 00:22:40.871 ] 00:22:40.871 }' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:40.871 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=771 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.872 12:04:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.872 [2024-07-25 12:04:26.980059] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:40.872 [2024-07-25 12:04:26.980275] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:41.130 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.130 "name": "raid_bdev1", 00:22:41.130 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:41.130 "strip_size_kb": 0, 00:22:41.130 "state": "online", 00:22:41.130 "raid_level": "raid1", 00:22:41.130 "superblock": false, 00:22:41.130 "num_base_bdevs": 2, 00:22:41.130 "num_base_bdevs_discovered": 2, 00:22:41.130 "num_base_bdevs_operational": 2, 00:22:41.130 "process": { 00:22:41.130 "type": "rebuild", 00:22:41.130 "target": "spare", 00:22:41.130 "progress": { 00:22:41.130 "blocks": 16384, 00:22:41.130 "percent": 25 00:22:41.130 } 00:22:41.130 }, 00:22:41.130 "base_bdevs_list": [ 00:22:41.130 { 00:22:41.130 "name": "spare", 00:22:41.130 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:41.130 "is_configured": true, 00:22:41.130 "data_offset": 0, 00:22:41.130 "data_size": 65536 00:22:41.130 }, 00:22:41.130 { 00:22:41.130 "name": "BaseBdev2", 00:22:41.130 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:41.130 "is_configured": true, 00:22:41.130 "data_offset": 0, 
00:22:41.130 "data_size": 65536 00:22:41.130 } 00:22:41.130 ] 00:22:41.130 }' 00:22:41.130 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:41.130 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:41.389 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:41.389 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.389 12:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:41.389 [2024-07-25 12:04:27.457443] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:41.957 [2024-07-25 12:04:27.919799] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:42.216 [2024-07-25 12:04:28.147835] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.216 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.474 [2024-07-25 12:04:28.391118] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:42.474 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:42.474 "name": "raid_bdev1", 00:22:42.474 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:42.474 "strip_size_kb": 0, 00:22:42.474 "state": "online", 00:22:42.474 "raid_level": "raid1", 00:22:42.474 "superblock": false, 00:22:42.474 "num_base_bdevs": 2, 00:22:42.474 "num_base_bdevs_discovered": 2, 00:22:42.474 "num_base_bdevs_operational": 2, 00:22:42.474 "process": { 00:22:42.474 "type": "rebuild", 00:22:42.474 "target": "spare", 00:22:42.474 "progress": { 00:22:42.474 "blocks": 34816, 00:22:42.474 "percent": 53 00:22:42.474 } 00:22:42.475 }, 00:22:42.475 "base_bdevs_list": [ 00:22:42.475 { 00:22:42.475 "name": "spare", 00:22:42.475 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:42.475 "is_configured": true, 00:22:42.475 "data_offset": 0, 00:22:42.475 "data_size": 65536 00:22:42.475 }, 00:22:42.475 { 00:22:42.475 "name": "BaseBdev2", 00:22:42.475 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:42.475 "is_configured": true, 00:22:42.475 "data_offset": 0, 00:22:42.475 "data_size": 65536 00:22:42.475 } 00:22:42.475 ] 00:22:42.475 }' 00:22:42.475 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // 
"none"' 00:22:42.475 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.475 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:42.733 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.733 12:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:42.733 [2024-07-25 12:04:28.734200] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.669 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.927 [2024-07-25 12:04:29.842947] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:43.927 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.927 "name": "raid_bdev1", 00:22:43.927 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:43.927 "strip_size_kb": 0, 00:22:43.927 "state": "online", 00:22:43.927 "raid_level": "raid1", 00:22:43.927 "superblock": false, 00:22:43.927 "num_base_bdevs": 2, 00:22:43.927 "num_base_bdevs_discovered": 2, 00:22:43.928 "num_base_bdevs_operational": 2, 00:22:43.928 "process": { 00:22:43.928 "type": "rebuild", 00:22:43.928 "target": "spare", 00:22:43.928 "progress": { 00:22:43.928 "blocks": 57344, 00:22:43.928 "percent": 87 00:22:43.928 } 00:22:43.928 }, 00:22:43.928 "base_bdevs_list": [ 00:22:43.928 { 00:22:43.928 "name": "spare", 00:22:43.928 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:43.928 "is_configured": true, 00:22:43.928 "data_offset": 0, 00:22:43.928 "data_size": 65536 00:22:43.928 }, 00:22:43.928 { 00:22:43.928 "name": "BaseBdev2", 00:22:43.928 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:43.928 "is_configured": true, 00:22:43.928 "data_offset": 0, 00:22:43.928 "data_size": 65536 00:22:43.928 } 00:22:43.928 ] 00:22:43.928 }' 00:22:43.928 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:43.928 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.928 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:43.928 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.928 12:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:44.186 [2024-07-25 12:04:30.164880] 
bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:44.186 [2024-07-25 12:04:30.265706] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:44.186 [2024-07-25 12:04:30.267728] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.122 12:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.122 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.122 "name": "raid_bdev1", 00:22:45.122 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:45.122 "strip_size_kb": 0, 00:22:45.122 "state": "online", 00:22:45.122 "raid_level": "raid1", 00:22:45.122 "superblock": false, 00:22:45.122 "num_base_bdevs": 2, 00:22:45.122 "num_base_bdevs_discovered": 2, 00:22:45.122 "num_base_bdevs_operational": 2, 00:22:45.122 "base_bdevs_list": [ 00:22:45.122 { 00:22:45.122 "name": "spare", 00:22:45.122 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:45.122 "is_configured": true, 00:22:45.122 "data_offset": 0, 00:22:45.122 "data_size": 65536 00:22:45.122 }, 00:22:45.122 { 00:22:45.122 "name": "BaseBdev2", 00:22:45.122 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:45.122 "is_configured": true, 00:22:45.122 "data_offset": 0, 00:22:45.122 "data_size": 65536 00:22:45.122 } 00:22:45.122 ] 00:22:45.122 }' 00:22:45.122 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.122 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:45.122 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.380 12:04:31 
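The rebuild that just completed was tracked by the sleep-1 loop above: each pass re-reads bdev_raid_get_bdevs and inspects the process object until its type falls back to "none". A hand-run equivalent of that polling; the process/progress field names come from the dumps above, while the exact jq expressions and the loop itself are illustrative:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Poll once per second until no rebuild process is reported for raid_bdev1.
while :; do
  proc=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process')
  [ "$(echo "$proc" | jq -r '.type // "none"')" = none ] && break
  echo "rebuild: target=$(echo "$proc" | jq -r '.target') percent=$(echo "$proc" | jq -r '.progress.percent')"
  sleep 1
done
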
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.380 "name": "raid_bdev1", 00:22:45.380 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:45.380 "strip_size_kb": 0, 00:22:45.380 "state": "online", 00:22:45.380 "raid_level": "raid1", 00:22:45.380 "superblock": false, 00:22:45.380 "num_base_bdevs": 2, 00:22:45.380 "num_base_bdevs_discovered": 2, 00:22:45.380 "num_base_bdevs_operational": 2, 00:22:45.380 "base_bdevs_list": [ 00:22:45.380 { 00:22:45.380 "name": "spare", 00:22:45.380 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:45.380 "is_configured": true, 00:22:45.380 "data_offset": 0, 00:22:45.380 "data_size": 65536 00:22:45.380 }, 00:22:45.380 { 00:22:45.380 "name": "BaseBdev2", 00:22:45.380 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:45.380 "is_configured": true, 00:22:45.380 "data_offset": 0, 00:22:45.380 "data_size": 65536 00:22:45.380 } 00:22:45.380 ] 00:22:45.380 }' 00:22:45.380 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:45.639 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.640 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.640 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.640 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.640 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.640 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.898 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.898 "name": "raid_bdev1", 00:22:45.898 "uuid": "32f80217-026c-482c-a251-2c5098c646c8", 00:22:45.898 "strip_size_kb": 0, 00:22:45.898 "state": "online", 00:22:45.898 "raid_level": "raid1", 00:22:45.898 "superblock": false, 00:22:45.898 "num_base_bdevs": 2, 00:22:45.898 "num_base_bdevs_discovered": 2, 00:22:45.898 "num_base_bdevs_operational": 2, 00:22:45.898 "base_bdevs_list": [ 00:22:45.898 { 00:22:45.898 "name": "spare", 
00:22:45.898 "uuid": "c0c811ea-0625-5908-9ed5-331e7028a1c1", 00:22:45.898 "is_configured": true, 00:22:45.898 "data_offset": 0, 00:22:45.898 "data_size": 65536 00:22:45.898 }, 00:22:45.898 { 00:22:45.898 "name": "BaseBdev2", 00:22:45.898 "uuid": "efe12234-d310-5a98-a7d8-f3dc591ee129", 00:22:45.898 "is_configured": true, 00:22:45.898 "data_offset": 0, 00:22:45.898 "data_size": 65536 00:22:45.898 } 00:22:45.898 ] 00:22:45.898 }' 00:22:45.898 12:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.898 12:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:46.466 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:46.725 [2024-07-25 12:04:32.618628] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.725 [2024-07-25 12:04:32.618657] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.725 00:22:46.725 Latency(us) 00:22:46.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.725 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:46.725 raid_bdev1 : 11.41 99.52 298.57 0.00 0.00 13578.50 268.70 117440.51 00:22:46.725 =================================================================================================================== 00:22:46.725 Total : 99.52 298.57 0.00 0.00 13578.50 268.70 117440.51 00:22:46.725 [2024-07-25 12:04:32.670449] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.725 [2024-07-25 12:04:32.670474] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.725 [2024-07-25 12:04:32.670545] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.725 [2024-07-25 12:04:32.670556] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22e5270 name raid_bdev1, state offline 00:22:46.725 0 00:22:46.725 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.725 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.983 12:04:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:47.242 /dev/nbd0 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:47.242 1+0 records in 00:22:47.242 1+0 records out 00:22:47.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254567 s, 16.1 MB/s 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- 
# local i 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:47.242 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:47.500 /dev/nbd1 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:47.500 1+0 records in 00:22:47.500 1+0 records out 00:22:47.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247293 s, 16.6 MB/s 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:22:47.500 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.501 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.759 12:04:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 35776 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 35776 ']' 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 35776 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 35776 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
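The trace above is the data-verification leg of raid_rebuild_test_io: the rebuilt "spare" bdev and the surviving "BaseBdev2" are exported over NBD, byte-compared, and then the NBD devices are torn down again. A condensed sketch of that flow, using only the RPCs visible in the log (the shell variables are illustrative, not names from the harness):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Export both bdevs as NBD block devices.
  $rpc -s $sock nbd_start_disk spare     /dev/nbd0
  $rpc -s $sock nbd_start_disk BaseBdev2 /dev/nbd1

  # After a RAID1 rebuild the two mirrors should be byte-identical.
  cmp -i 0 /dev/nbd0 /dev/nbd1

  # Tear down in reverse order, as the test does.
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_stop_disk /dev/nbd0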
00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 35776' 00:22:48.019 killing process with pid 35776 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 35776 00:22:48.019 Received shutdown signal, test time was about 12.826249 seconds 00:22:48.019 00:22:48.019 Latency(us) 00:22:48.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.019 =================================================================================================================== 00:22:48.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.019 [2024-07-25 12:04:34.081876] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:48.019 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 35776 00:22:48.019 [2024-07-25 12:04:34.101093] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:22:48.278 00:22:48.278 real 0m17.214s 00:22:48.278 user 0m25.953s 00:22:48.278 sys 0m2.724s 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:48.278 ************************************ 00:22:48.278 END TEST raid_rebuild_test_io 00:22:48.278 ************************************ 00:22:48.278 12:04:34 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:22:48.278 12:04:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:48.278 12:04:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.278 12:04:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:48.278 ************************************ 00:22:48.278 START TEST raid_rebuild_test_sb_io 00:22:48.278 ************************************ 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:48.278 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 
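The next subtest reuses the same raid_rebuild_test helper, this time with the superblock and background-I/O options switched on. Roughly, the positional arguments of the run_test call above map onto the locals being set in this trace (a sketch of the call shape inferred from the trace, not the harness source):

  # raid_rebuild_test <raid_level> <num_base_bdevs> <superblock> <background_io> <verify>
  raid_rebuild_test raid1 2 true true true
  # which the function records as:
  #   raid_level=raid1   num_base_bdevs=2
  #   superblock=true    background_io=true    verify=true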
00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=38917 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 38917 /var/tmp/spdk-raid.sock 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 38917 ']' 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:48.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.279 12:04:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:48.537 [2024-07-25 12:04:34.448170] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:22:48.537 [2024-07-25 12:04:34.448234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38917 ] 00:22:48.537 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:48.537 Zero copy mechanism will not be used. 
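The target under test is a bdevperf instance launched on a private RPC socket with -z, so the workload itself is only started later through bdevperf.py perform_tests (seen further down in the log); the harness first waits for the socket to answer. A trimmed sketch of that start-up, with the polling loop written out as a simplified stand-in for the waitforlisten helper (the rpc_get_methods probe and 0.5 s interval are assumptions, not taken from the script):

  bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Start the target idle (-z) with the same workload options as the log.
  $bdevperf -r $sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Poll until the RPC socket responds; bail out if the target died.
  until $rpc -s $sock rpc_get_methods &>/dev/null; do
      kill -0 $raid_pid
      sleep 0.5
  done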
00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.537 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.537 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.537 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.537 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.537 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:48.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:48.538 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:48.538 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.538 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:48.538 [2024-07-25 12:04:34.565950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.538 [2024-07-25 12:04:34.647813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.796 [2024-07-25 12:04:34.707744] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.796 [2024-07-25 12:04:34.707781] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:49.362 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.362 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:22:49.362 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:49.362 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:49.620 BaseBdev1_malloc 00:22:49.620 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:49.620 [2024-07-25 12:04:35.737377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:49.620 [2024-07-25 12:04:35.737425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.620 [2024-07-25 12:04:35.737446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x115b5f0 00:22:49.620 [2024-07-25 12:04:35.737458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.880 [2024-07-25 12:04:35.738916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.880 [2024-07-25 12:04:35.738943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:49.880 BaseBdev1 00:22:49.880 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:49.880 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:49.880 BaseBdev2_malloc 00:22:49.880 12:04:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:22:50.190 [2024-07-25 12:04:36.198938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:50.190 [2024-07-25 12:04:36.198979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.190 [2024-07-25 12:04:36.198997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ff130 00:22:50.190 [2024-07-25 12:04:36.199008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.190 [2024-07-25 12:04:36.200361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.190 [2024-07-25 12:04:36.200402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:50.190 BaseBdev2 00:22:50.190 12:04:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:50.448 spare_malloc 00:22:50.448 12:04:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:50.707 spare_delay 00:22:50.707 12:04:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:50.966 [2024-07-25 12:04:36.880948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:50.966 [2024-07-25 12:04:36.880986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.966 [2024-07-25 12:04:36.881003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12fe770 00:22:50.966 [2024-07-25 12:04:36.881015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.966 [2024-07-25 12:04:36.882282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.966 [2024-07-25 12:04:36.882306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:50.966 spare 00:22:50.966 12:04:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:50.966 [2024-07-25 12:04:37.061453] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:50.966 [2024-07-25 12:04:37.062518] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.966 [2024-07-25 12:04:37.062660] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1153270 00:22:50.966 [2024-07-25 12:04:37.062672] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:50.966 [2024-07-25 12:04:37.062826] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ff3c0 00:22:50.966 [2024-07-25 12:04:37.062950] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1153270 00:22:50.966 [2024-07-25 12:04:37.062959] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1153270 00:22:50.966 [2024-07-25 12:04:37.063041] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.966 12:04:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.966 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.224 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.224 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.224 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.224 "name": "raid_bdev1", 00:22:51.224 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:51.224 "strip_size_kb": 0, 00:22:51.224 "state": "online", 00:22:51.224 "raid_level": "raid1", 00:22:51.224 "superblock": true, 00:22:51.224 "num_base_bdevs": 2, 00:22:51.224 "num_base_bdevs_discovered": 2, 00:22:51.224 "num_base_bdevs_operational": 2, 00:22:51.224 "base_bdevs_list": [ 00:22:51.224 { 00:22:51.224 "name": "BaseBdev1", 00:22:51.224 "uuid": "f9b21439-24d5-5a9a-a513-789f61405783", 00:22:51.224 "is_configured": true, 00:22:51.224 "data_offset": 2048, 00:22:51.224 "data_size": 63488 00:22:51.224 }, 00:22:51.224 { 00:22:51.224 "name": "BaseBdev2", 00:22:51.224 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:51.224 "is_configured": true, 00:22:51.224 "data_offset": 2048, 00:22:51.224 "data_size": 63488 00:22:51.224 } 00:22:51.224 ] 00:22:51.224 }' 00:22:51.224 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.224 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:51.792 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:51.792 12:04:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:52.051 [2024-07-25 12:04:38.084374] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.051 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:22:52.051 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.051 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:52.310 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@618 -- # data_offset=2048 00:22:52.310 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:22:52.310 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:52.310 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:52.569 [2024-07-25 12:04:38.435086] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12f4620 00:22:52.569 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:52.569 Zero copy mechanism will not be used. 00:22:52.569 Running I/O for 60 seconds... 00:22:52.569 [2024-07-25 12:04:38.541114] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.569 [2024-07-25 12:04:38.548652] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x12f4620 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.569 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.828 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.828 "name": "raid_bdev1", 00:22:52.828 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:52.828 "strip_size_kb": 0, 00:22:52.828 "state": "online", 00:22:52.828 "raid_level": "raid1", 00:22:52.828 "superblock": true, 00:22:52.828 "num_base_bdevs": 2, 00:22:52.828 "num_base_bdevs_discovered": 1, 00:22:52.828 "num_base_bdevs_operational": 1, 00:22:52.828 "base_bdevs_list": [ 00:22:52.828 { 00:22:52.828 "name": null, 00:22:52.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.828 "is_configured": false, 00:22:52.828 "data_offset": 2048, 00:22:52.828 "data_size": 63488 00:22:52.828 }, 00:22:52.828 { 00:22:52.828 "name": "BaseBdev2", 00:22:52.828 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:52.828 "is_configured": true, 00:22:52.828 "data_offset": 2048, 00:22:52.828 "data_size": 63488 00:22:52.828 } 00:22:52.828 ] 00:22:52.828 }' 00:22:52.828 
12:04:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.828 12:04:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:53.395 12:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.654 [2024-07-25 12:04:39.526907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.654 12:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:53.654 [2024-07-25 12:04:39.580827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12f2a60 00:22:53.654 [2024-07-25 12:04:39.583183] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.654 [2024-07-25 12:04:39.694946] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:53.654 [2024-07-25 12:04:39.695174] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:53.913 [2024-07-25 12:04:39.905337] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:53.913 [2024-07-25 12:04:39.905496] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:54.172 [2024-07-25 12:04:40.266824] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.741 "name": "raid_bdev1", 00:22:54.741 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:54.741 "strip_size_kb": 0, 00:22:54.741 "state": "online", 00:22:54.741 "raid_level": "raid1", 00:22:54.741 "superblock": true, 00:22:54.741 "num_base_bdevs": 2, 00:22:54.741 "num_base_bdevs_discovered": 2, 00:22:54.741 "num_base_bdevs_operational": 2, 00:22:54.741 "process": { 00:22:54.741 "type": "rebuild", 00:22:54.741 "target": "spare", 00:22:54.741 "progress": { 00:22:54.741 "blocks": 14336, 00:22:54.741 "percent": 22 00:22:54.741 } 00:22:54.741 }, 00:22:54.741 "base_bdevs_list": [ 00:22:54.741 { 00:22:54.741 "name": "spare", 00:22:54.741 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:22:54.741 "is_configured": true, 00:22:54.741 "data_offset": 2048, 00:22:54.741 "data_size": 63488 00:22:54.741 }, 00:22:54.741 { 00:22:54.741 "name": "BaseBdev2", 
00:22:54.741 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:54.741 "is_configured": true, 00:22:54.741 "data_offset": 2048, 00:22:54.741 "data_size": 63488 00:22:54.741 } 00:22:54.741 ] 00:22:54.741 }' 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.741 12:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:55.001 [2024-07-25 12:04:41.025722] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:55.001 [2024-07-25 12:04:41.026113] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:55.001 [2024-07-25 12:04:41.034925] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.261 [2024-07-25 12:04:41.135636] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:55.261 [2024-07-25 12:04:41.244096] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:55.261 [2024-07-25 12:04:41.252965] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.261 [2024-07-25 12:04:41.252989] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.261 [2024-07-25 12:04:41.252999] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:55.261 [2024-07-25 12:04:41.266274] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x12f4620 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.261 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.521 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.521 "name": "raid_bdev1", 00:22:55.521 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:55.521 "strip_size_kb": 0, 00:22:55.521 "state": "online", 00:22:55.521 "raid_level": "raid1", 00:22:55.521 "superblock": true, 00:22:55.521 "num_base_bdevs": 2, 00:22:55.521 "num_base_bdevs_discovered": 1, 00:22:55.521 "num_base_bdevs_operational": 1, 00:22:55.521 "base_bdevs_list": [ 00:22:55.521 { 00:22:55.521 "name": null, 00:22:55.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.521 "is_configured": false, 00:22:55.521 "data_offset": 2048, 00:22:55.521 "data_size": 63488 00:22:55.521 }, 00:22:55.521 { 00:22:55.521 "name": "BaseBdev2", 00:22:55.521 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:55.521 "is_configured": true, 00:22:55.521 "data_offset": 2048, 00:22:55.521 "data_size": 63488 00:22:55.521 } 00:22:55.521 ] 00:22:55.521 }' 00:22:55.521 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.521 12:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.089 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:56.349 "name": "raid_bdev1", 00:22:56.349 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:56.349 "strip_size_kb": 0, 00:22:56.349 "state": "online", 00:22:56.349 "raid_level": "raid1", 00:22:56.349 "superblock": true, 00:22:56.349 "num_base_bdevs": 2, 00:22:56.349 "num_base_bdevs_discovered": 1, 00:22:56.349 "num_base_bdevs_operational": 1, 00:22:56.349 "base_bdevs_list": [ 00:22:56.349 { 00:22:56.349 "name": null, 00:22:56.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.349 "is_configured": false, 00:22:56.349 "data_offset": 2048, 00:22:56.349 "data_size": 63488 00:22:56.349 }, 00:22:56.349 { 00:22:56.349 "name": "BaseBdev2", 00:22:56.349 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:56.349 "is_configured": true, 00:22:56.349 "data_offset": 2048, 00:22:56.349 "data_size": 63488 00:22:56.349 } 00:22:56.349 ] 00:22:56.349 }' 00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
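With the array confirmed degraded (one discovered base bdev, no active process), the harness below re-adds the repaired "spare" bdev, which starts a rebuild, and then polls the raid bdev until the process field disappears. A compact illustrative loop over the same RPC and jq filters used in the trace (the 60 s budget is an assumption; the real script tracks its own timeout variable):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_raid_add_base_bdev raid_bdev1 spare

  timeout=60
  while (( SECONDS < timeout )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # An absent process object means the rebuild has finished.
      [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] && break
      echo "rebuild at $(jq -r '.process.progress.percent' <<< "$info")%"
      sleep 1
  done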
00:22:56.349 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.608 [2024-07-25 12:04:42.655665] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.608 [2024-07-25 12:04:42.709910] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12f61e0 00:22:56.608 12:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:56.608 [2024-07-25 12:04:42.711310] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.867 [2024-07-25 12:04:42.820385] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:56.868 [2024-07-25 12:04:42.820680] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:56.868 [2024-07-25 12:04:42.937900] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:56.868 [2024-07-25 12:04:42.938036] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:57.436 [2024-07-25 12:04:43.311596] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:57.436 [2024-07-25 12:04:43.544660] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:57.436 [2024-07-25 12:04:43.544849] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.695 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.954 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.954 "name": "raid_bdev1", 00:22:57.955 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:57.955 "strip_size_kb": 0, 00:22:57.955 "state": "online", 00:22:57.955 "raid_level": "raid1", 00:22:57.955 "superblock": true, 00:22:57.955 "num_base_bdevs": 2, 00:22:57.955 "num_base_bdevs_discovered": 2, 00:22:57.955 "num_base_bdevs_operational": 2, 00:22:57.955 "process": { 00:22:57.955 "type": "rebuild", 00:22:57.955 "target": "spare", 00:22:57.955 "progress": { 00:22:57.955 "blocks": 14336, 00:22:57.955 "percent": 22 00:22:57.955 } 00:22:57.955 }, 00:22:57.955 "base_bdevs_list": [ 00:22:57.955 { 00:22:57.955 "name": "spare", 00:22:57.955 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:22:57.955 "is_configured": 
true, 00:22:57.955 "data_offset": 2048, 00:22:57.955 "data_size": 63488 00:22:57.955 }, 00:22:57.955 { 00:22:57.955 "name": "BaseBdev2", 00:22:57.955 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:57.955 "is_configured": true, 00:22:57.955 "data_offset": 2048, 00:22:57.955 "data_size": 63488 00:22:57.955 } 00:22:57.955 ] 00:22:57.955 }' 00:22:57.955 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:57.955 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.955 12:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:57.955 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=789 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.955 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.214 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.214 "name": "raid_bdev1", 00:22:58.214 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:58.214 "strip_size_kb": 0, 00:22:58.214 "state": "online", 00:22:58.214 "raid_level": "raid1", 00:22:58.214 "superblock": true, 00:22:58.214 "num_base_bdevs": 2, 00:22:58.214 "num_base_bdevs_discovered": 2, 00:22:58.214 "num_base_bdevs_operational": 2, 00:22:58.214 "process": { 00:22:58.214 "type": "rebuild", 00:22:58.214 "target": "spare", 00:22:58.214 "progress": { 00:22:58.214 "blocks": 20480, 00:22:58.214 "percent": 32 00:22:58.214 } 00:22:58.214 }, 00:22:58.214 "base_bdevs_list": [ 00:22:58.214 { 00:22:58.214 "name": "spare", 00:22:58.214 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:22:58.214 "is_configured": true, 00:22:58.214 "data_offset": 2048, 00:22:58.214 "data_size": 63488 00:22:58.214 }, 00:22:58.214 { 00:22:58.214 
"name": "BaseBdev2", 00:22:58.214 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:58.214 "is_configured": true, 00:22:58.214 "data_offset": 2048, 00:22:58.214 "data_size": 63488 00:22:58.214 } 00:22:58.214 ] 00:22:58.214 }' 00:22:58.214 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.214 [2024-07-25 12:04:44.299427] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:58.214 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.214 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.474 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.474 12:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:58.733 [2024-07-25 12:04:44.625211] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:58.733 [2024-07-25 12:04:44.742187] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:58.992 [2024-07-25 12:04:45.008583] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:59.251 [2024-07-25 12:04:45.242316] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.251 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.510 [2024-07-25 12:04:45.562360] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:59.510 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.510 "name": "raid_bdev1", 00:22:59.510 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:22:59.510 "strip_size_kb": 0, 00:22:59.510 "state": "online", 00:22:59.510 "raid_level": "raid1", 00:22:59.510 "superblock": true, 00:22:59.510 "num_base_bdevs": 2, 00:22:59.510 "num_base_bdevs_discovered": 2, 00:22:59.510 "num_base_bdevs_operational": 2, 00:22:59.510 "process": { 00:22:59.510 "type": "rebuild", 00:22:59.510 "target": "spare", 00:22:59.510 "progress": { 00:22:59.510 "blocks": 38912, 00:22:59.510 "percent": 61 00:22:59.510 } 00:22:59.510 }, 00:22:59.510 "base_bdevs_list": [ 00:22:59.510 { 00:22:59.510 "name": "spare", 00:22:59.510 
"uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:22:59.510 "is_configured": true, 00:22:59.510 "data_offset": 2048, 00:22:59.510 "data_size": 63488 00:22:59.510 }, 00:22:59.510 { 00:22:59.510 "name": "BaseBdev2", 00:22:59.510 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:22:59.510 "is_configured": true, 00:22:59.511 "data_offset": 2048, 00:22:59.511 "data_size": 63488 00:22:59.511 } 00:22:59.511 ] 00:22:59.511 }' 00:22:59.511 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:59.770 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.770 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:59.770 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.770 12:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:59.770 [2024-07-25 12:04:45.789598] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:00.028 [2024-07-25 12:04:46.031612] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:00.596 [2024-07-25 12:04:46.482928] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:00.596 [2024-07-25 12:04:46.483526] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:00.596 [2024-07-25 12:04:46.686294] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.596 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.855 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.855 "name": "raid_bdev1", 00:23:00.855 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:00.855 "strip_size_kb": 0, 00:23:00.855 "state": "online", 00:23:00.855 "raid_level": "raid1", 00:23:00.855 "superblock": true, 00:23:00.855 "num_base_bdevs": 2, 00:23:00.855 "num_base_bdevs_discovered": 2, 00:23:00.855 "num_base_bdevs_operational": 2, 00:23:00.855 "process": { 00:23:00.855 "type": "rebuild", 00:23:00.855 "target": "spare", 00:23:00.855 "progress": { 00:23:00.855 "blocks": 55296, 00:23:00.855 "percent": 87 00:23:00.855 } 00:23:00.855 }, 00:23:00.855 "base_bdevs_list": [ 00:23:00.855 { 00:23:00.855 
"name": "spare", 00:23:00.855 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:00.855 "is_configured": true, 00:23:00.855 "data_offset": 2048, 00:23:00.855 "data_size": 63488 00:23:00.855 }, 00:23:00.855 { 00:23:00.855 "name": "BaseBdev2", 00:23:00.855 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:00.855 "is_configured": true, 00:23:00.855 "data_offset": 2048, 00:23:00.855 "data_size": 63488 00:23:00.855 } 00:23:00.855 ] 00:23:00.855 }' 00:23:00.855 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:01.155 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.155 12:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:01.155 12:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.155 12:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:01.155 [2024-07-25 12:04:47.250025] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:01.413 [2024-07-25 12:04:47.350257] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:01.413 [2024-07-25 12:04:47.351302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.981 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:02.240 "name": "raid_bdev1", 00:23:02.240 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:02.240 "strip_size_kb": 0, 00:23:02.240 "state": "online", 00:23:02.240 "raid_level": "raid1", 00:23:02.240 "superblock": true, 00:23:02.240 "num_base_bdevs": 2, 00:23:02.240 "num_base_bdevs_discovered": 2, 00:23:02.240 "num_base_bdevs_operational": 2, 00:23:02.240 "base_bdevs_list": [ 00:23:02.240 { 00:23:02.240 "name": "spare", 00:23:02.240 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:02.240 "is_configured": true, 00:23:02.240 "data_offset": 2048, 00:23:02.240 "data_size": 63488 00:23:02.240 }, 00:23:02.240 { 00:23:02.240 "name": "BaseBdev2", 00:23:02.240 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:02.240 "is_configured": true, 00:23:02.240 "data_offset": 2048, 00:23:02.240 "data_size": 63488 00:23:02.240 } 00:23:02.240 ] 00:23:02.240 }' 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.240 12:04:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.240 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.499 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:02.499 "name": "raid_bdev1", 00:23:02.499 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:02.499 "strip_size_kb": 0, 00:23:02.499 "state": "online", 00:23:02.499 "raid_level": "raid1", 00:23:02.499 "superblock": true, 00:23:02.499 "num_base_bdevs": 2, 00:23:02.499 "num_base_bdevs_discovered": 2, 00:23:02.499 "num_base_bdevs_operational": 2, 00:23:02.499 "base_bdevs_list": [ 00:23:02.499 { 00:23:02.499 "name": "spare", 00:23:02.499 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:02.499 "is_configured": true, 00:23:02.499 "data_offset": 2048, 00:23:02.499 "data_size": 63488 00:23:02.499 }, 00:23:02.499 { 00:23:02.499 "name": "BaseBdev2", 00:23:02.499 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:02.499 "is_configured": true, 00:23:02.499 "data_offset": 2048, 00:23:02.499 "data_size": 63488 00:23:02.499 } 00:23:02.499 ] 00:23:02.499 }' 00:23:02.499 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.757 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.015 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.015 "name": "raid_bdev1", 00:23:03.015 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:03.015 "strip_size_kb": 0, 00:23:03.015 "state": "online", 00:23:03.015 "raid_level": "raid1", 00:23:03.015 "superblock": true, 00:23:03.015 "num_base_bdevs": 2, 00:23:03.015 "num_base_bdevs_discovered": 2, 00:23:03.015 "num_base_bdevs_operational": 2, 00:23:03.015 "base_bdevs_list": [ 00:23:03.015 { 00:23:03.015 "name": "spare", 00:23:03.015 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:03.015 "is_configured": true, 00:23:03.015 "data_offset": 2048, 00:23:03.015 "data_size": 63488 00:23:03.015 }, 00:23:03.015 { 00:23:03.015 "name": "BaseBdev2", 00:23:03.015 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:03.015 "is_configured": true, 00:23:03.015 "data_offset": 2048, 00:23:03.015 "data_size": 63488 00:23:03.015 } 00:23:03.015 ] 00:23:03.015 }' 00:23:03.015 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.015 12:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:03.583 12:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:03.583 [2024-07-25 12:04:49.661774] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.583 [2024-07-25 12:04:49.661803] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.842 00:23:03.842 Latency(us) 00:23:03.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.842 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:03.842 raid_bdev1 : 11.30 102.33 306.98 0.00 0.00 13434.78 273.61 117440.51 00:23:03.842 =================================================================================================================== 00:23:03.842 Total : 102.33 306.98 0.00 0.00 13434.78 273.61 117440.51 00:23:03.842 [2024-07-25 12:04:49.765697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.842 [2024-07-25 12:04:49.765725] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.842 [2024-07-25 12:04:49.765793] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.842 [2024-07-25 12:04:49.765805] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1153270 name raid_bdev1, state offline 00:23:03.842 0 00:23:03.842 12:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
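Everything from the repeated bdev_raid_get_bdevs calls above down to the break at bdev_raid.sh@708 is the test's rebuild-wait loop: it polls raid_bdev1 about once a second and leaves the loop as soon as jq reports .process.type as "none". The Latency(us) summary printed after bdev_raid_delete is self-consistent as well: 102.33 IOPS of 3 MiB (3145728-byte) I/O works out to roughly 306.98 MiB/s over the 11.30 s runtime. The bdev_raid_get_bdevs | jq length pair that continues just below then confirms no raid bdevs remain once the delete has gone through. A minimal sketch of that polling pattern, reusing the rpc.py path and RAID socket seen throughout this job (the timeout value here is an assumption, not taken from the script):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=120        # assumed upper bound; the real value comes from bdev_raid.sh
    SECONDS=0
    while (( SECONDS < timeout )); do
        ptype=$($rpc bdev_raid_get_bdevs all \
                | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == none ]] && break    # the process block disappears once the rebuild finishes
        sleep 1
    done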
00:23:03.842 12:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.101 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:04.360 /dev/nbd0 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.360 1+0 records in 00:23:04.360 1+0 records out 00:23:04.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025626 s, 16.0 MB/s 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.360 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:04.620 /dev/nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.620 1+0 records in 00:23:04.620 1+0 records out 00:23:04.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290017 s, 14.1 MB/s 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.620 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.879 12:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:05.138 
12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:05.138 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:05.397 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:05.656 [2024-07-25 12:04:51.530766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.656 [2024-07-25 12:04:51.530810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.656 [2024-07-25 12:04:51.530829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11540a0 00:23:05.656 [2024-07-25 12:04:51.530841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.656 [2024-07-25 12:04:51.532371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.656 [2024-07-25 12:04:51.532399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.656 [2024-07-25 12:04:51.532474] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:05.656 [2024-07-25 12:04:51.532500] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.656 [2024-07-25 12:04:51.532598] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.656 spare 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.656 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.656 [2024-07-25 12:04:51.632909] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x115a7e0 00:23:05.656 [2024-07-25 12:04:51.632930] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:05.656 [2024-07-25 12:04:51.633115] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x130dcc0 00:23:05.656 [2024-07-25 12:04:51.633264] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x115a7e0 00:23:05.656 [2024-07-25 12:04:51.633274] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x115a7e0 00:23:05.656 [2024-07-25 12:04:51.633382] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.915 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.915 "name": "raid_bdev1", 00:23:05.915 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:05.915 "strip_size_kb": 0, 00:23:05.915 "state": "online", 00:23:05.915 "raid_level": "raid1", 00:23:05.915 "superblock": true, 00:23:05.915 "num_base_bdevs": 2, 00:23:05.915 "num_base_bdevs_discovered": 2, 00:23:05.915 "num_base_bdevs_operational": 2, 00:23:05.915 "base_bdevs_list": [ 00:23:05.915 { 00:23:05.915 "name": "spare", 00:23:05.915 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:05.915 "is_configured": true, 00:23:05.915 "data_offset": 2048, 00:23:05.915 "data_size": 63488 00:23:05.915 }, 00:23:05.915 { 00:23:05.915 "name": "BaseBdev2", 00:23:05.915 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:05.915 "is_configured": true, 00:23:05.915 "data_offset": 2048, 00:23:05.915 "data_size": 63488 00:23:05.915 } 00:23:05.915 ] 00:23:05.915 }' 00:23:05.915 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.915 12:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:06.483 "name": "raid_bdev1", 00:23:06.483 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:06.483 "strip_size_kb": 0, 00:23:06.483 "state": "online", 00:23:06.483 "raid_level": "raid1", 00:23:06.483 "superblock": true, 00:23:06.483 "num_base_bdevs": 2, 00:23:06.483 "num_base_bdevs_discovered": 2, 00:23:06.483 "num_base_bdevs_operational": 2, 00:23:06.483 "base_bdevs_list": [ 00:23:06.483 { 00:23:06.483 "name": "spare", 00:23:06.483 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:06.483 "is_configured": true, 00:23:06.483 "data_offset": 2048, 
00:23:06.483 "data_size": 63488 00:23:06.483 }, 00:23:06.483 { 00:23:06.483 "name": "BaseBdev2", 00:23:06.483 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:06.483 "is_configured": true, 00:23:06.483 "data_offset": 2048, 00:23:06.483 "data_size": 63488 00:23:06.483 } 00:23:06.483 ] 00:23:06.483 }' 00:23:06.483 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:06.742 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:06.742 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:06.742 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:06.742 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.742 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:07.000 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.000 12:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:07.259 [2024-07-25 12:04:53.139287] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.259 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.518 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.518 "name": "raid_bdev1", 00:23:07.518 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:07.518 "strip_size_kb": 0, 00:23:07.518 "state": "online", 00:23:07.518 "raid_level": "raid1", 00:23:07.518 "superblock": true, 00:23:07.518 "num_base_bdevs": 2, 00:23:07.518 "num_base_bdevs_discovered": 1, 00:23:07.518 "num_base_bdevs_operational": 1, 00:23:07.518 "base_bdevs_list": [ 00:23:07.518 { 00:23:07.518 "name": null, 00:23:07.518 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:07.518 "is_configured": false, 00:23:07.518 "data_offset": 2048, 00:23:07.518 "data_size": 63488 00:23:07.518 }, 00:23:07.518 { 00:23:07.518 "name": "BaseBdev2", 00:23:07.518 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:07.518 "is_configured": true, 00:23:07.518 "data_offset": 2048, 00:23:07.518 "data_size": 63488 00:23:07.518 } 00:23:07.518 ] 00:23:07.518 }' 00:23:07.518 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.518 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 12:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:08.085 [2024-07-25 12:04:54.162125] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.085 [2024-07-25 12:04:54.162271] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:08.085 [2024-07-25 12:04:54.162287] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:08.085 [2024-07-25 12:04:54.162314] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.085 [2024-07-25 12:04:54.167328] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ff3c0 00:23:08.085 [2024-07-25 12:04:54.169477] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:08.085 12:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.463 "name": "raid_bdev1", 00:23:09.463 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:09.463 "strip_size_kb": 0, 00:23:09.463 "state": "online", 00:23:09.463 "raid_level": "raid1", 00:23:09.463 "superblock": true, 00:23:09.463 "num_base_bdevs": 2, 00:23:09.463 "num_base_bdevs_discovered": 2, 00:23:09.463 "num_base_bdevs_operational": 2, 00:23:09.463 "process": { 00:23:09.463 "type": "rebuild", 00:23:09.463 "target": "spare", 00:23:09.463 "progress": { 00:23:09.463 "blocks": 24576, 00:23:09.463 "percent": 38 00:23:09.463 } 00:23:09.463 }, 00:23:09.463 "base_bdevs_list": [ 00:23:09.463 { 00:23:09.463 "name": "spare", 00:23:09.463 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:09.463 "is_configured": true, 00:23:09.463 "data_offset": 2048, 00:23:09.463 
"data_size": 63488 00:23:09.463 }, 00:23:09.463 { 00:23:09.463 "name": "BaseBdev2", 00:23:09.463 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:09.463 "is_configured": true, 00:23:09.463 "data_offset": 2048, 00:23:09.463 "data_size": 63488 00:23:09.463 } 00:23:09.463 ] 00:23:09.463 }' 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.463 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:09.722 [2024-07-25 12:04:55.720713] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:09.722 [2024-07-25 12:04:55.781244] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:09.722 [2024-07-25 12:04:55.781291] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.722 [2024-07-25 12:04:55.781306] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:09.722 [2024-07-25 12:04:55.781314] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.722 12:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.981 12:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.981 "name": "raid_bdev1", 00:23:09.981 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:09.981 "strip_size_kb": 0, 00:23:09.981 "state": "online", 00:23:09.981 "raid_level": "raid1", 00:23:09.981 "superblock": true, 00:23:09.981 "num_base_bdevs": 2, 00:23:09.981 "num_base_bdevs_discovered": 1, 00:23:09.981 "num_base_bdevs_operational": 1, 00:23:09.981 "base_bdevs_list": [ 
00:23:09.981 { 00:23:09.981 "name": null, 00:23:09.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.981 "is_configured": false, 00:23:09.981 "data_offset": 2048, 00:23:09.981 "data_size": 63488 00:23:09.981 }, 00:23:09.981 { 00:23:09.981 "name": "BaseBdev2", 00:23:09.981 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:09.981 "is_configured": true, 00:23:09.981 "data_offset": 2048, 00:23:09.981 "data_size": 63488 00:23:09.981 } 00:23:09.981 ] 00:23:09.981 }' 00:23:09.981 12:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.981 12:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:10.548 12:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:10.807 [2024-07-25 12:04:56.812622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.807 [2024-07-25 12:04:56.812670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.807 [2024-07-25 12:04:56.812689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1155880 00:23:10.807 [2024-07-25 12:04:56.812701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.807 [2024-07-25 12:04:56.813047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.807 [2024-07-25 12:04:56.813064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.807 [2024-07-25 12:04:56.813145] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:10.807 [2024-07-25 12:04:56.813157] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:10.807 [2024-07-25 12:04:56.813167] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
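The passage above tears down the spare_delay passthru (which pulls "spare" out of raid_bdev1 and leaves the raid1 array online with a single configured base bdev), then recreates the passthru and lets SPDK's examine path put it back: because the superblock on the returning bdev carries an older sequence number (4) than the raid bdev (5), bdev_raid logs "Re-adding bdev spare" and starts another rebuild. A rough sketch of that remove/re-add cycle using only the RPCs visible in this trace (the comments are illustrative, not lines from the script):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete spare                      # removing the passthru drops "spare" from raid_bdev1
    $rpc bdev_passthru_create -b spare_delay -p spare    # re-creating it triggers bdev examine on the old superblock
    # examine sees seq_number 4 < 5, re-adds the bdev to raid_bdev1, and kicks off a rebuild
    # that the test waits out with the same .process.type polling loop shown earlier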
00:23:10.807 [2024-07-25 12:04:56.813184] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.807 [2024-07-25 12:04:56.818194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12f62b0 00:23:10.807 spare 00:23:10.807 [2024-07-25 12:04:56.819468] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.807 12:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:11.743 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.743 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:11.743 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:11.743 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:11.743 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:11.744 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.744 12:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.002 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:12.002 "name": "raid_bdev1", 00:23:12.002 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:12.002 "strip_size_kb": 0, 00:23:12.002 "state": "online", 00:23:12.002 "raid_level": "raid1", 00:23:12.002 "superblock": true, 00:23:12.002 "num_base_bdevs": 2, 00:23:12.002 "num_base_bdevs_discovered": 2, 00:23:12.002 "num_base_bdevs_operational": 2, 00:23:12.002 "process": { 00:23:12.002 "type": "rebuild", 00:23:12.002 "target": "spare", 00:23:12.002 "progress": { 00:23:12.002 "blocks": 24576, 00:23:12.002 "percent": 38 00:23:12.002 } 00:23:12.002 }, 00:23:12.002 "base_bdevs_list": [ 00:23:12.002 { 00:23:12.002 "name": "spare", 00:23:12.002 "uuid": "19440184-e2d8-5410-95b7-65b9b2ef662f", 00:23:12.002 "is_configured": true, 00:23:12.002 "data_offset": 2048, 00:23:12.002 "data_size": 63488 00:23:12.002 }, 00:23:12.002 { 00:23:12.002 "name": "BaseBdev2", 00:23:12.002 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:12.002 "is_configured": true, 00:23:12.002 "data_offset": 2048, 00:23:12.002 "data_size": 63488 00:23:12.002 } 00:23:12.002 ] 00:23:12.002 }' 00:23:12.002 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:12.002 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:12.002 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:12.261 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:12.261 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:12.261 [2024-07-25 12:04:58.366629] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.520 [2024-07-25 12:04:58.431184] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:12.520 [2024-07-25 
12:04:58.431231] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.520 [2024-07-25 12:04:58.431245] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.520 [2024-07-25 12:04:58.431253] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.520 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.779 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.779 "name": "raid_bdev1", 00:23:12.779 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:12.779 "strip_size_kb": 0, 00:23:12.779 "state": "online", 00:23:12.779 "raid_level": "raid1", 00:23:12.779 "superblock": true, 00:23:12.779 "num_base_bdevs": 2, 00:23:12.779 "num_base_bdevs_discovered": 1, 00:23:12.779 "num_base_bdevs_operational": 1, 00:23:12.779 "base_bdevs_list": [ 00:23:12.779 { 00:23:12.779 "name": null, 00:23:12.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.779 "is_configured": false, 00:23:12.779 "data_offset": 2048, 00:23:12.779 "data_size": 63488 00:23:12.779 }, 00:23:12.779 { 00:23:12.779 "name": "BaseBdev2", 00:23:12.779 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:12.779 "is_configured": true, 00:23:12.779 "data_offset": 2048, 00:23:12.779 "data_size": 63488 00:23:12.779 } 00:23:12.779 ] 00:23:12.779 }' 00:23:12.779 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.779 12:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:23:13.346 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.347 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.606 "name": "raid_bdev1", 00:23:13.606 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:13.606 "strip_size_kb": 0, 00:23:13.606 "state": "online", 00:23:13.606 "raid_level": "raid1", 00:23:13.606 "superblock": true, 00:23:13.606 "num_base_bdevs": 2, 00:23:13.606 "num_base_bdevs_discovered": 1, 00:23:13.606 "num_base_bdevs_operational": 1, 00:23:13.606 "base_bdevs_list": [ 00:23:13.606 { 00:23:13.606 "name": null, 00:23:13.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.606 "is_configured": false, 00:23:13.606 "data_offset": 2048, 00:23:13.606 "data_size": 63488 00:23:13.606 }, 00:23:13.606 { 00:23:13.606 "name": "BaseBdev2", 00:23:13.606 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:13.606 "is_configured": true, 00:23:13.606 "data_offset": 2048, 00:23:13.606 "data_size": 63488 00:23:13.606 } 00:23:13.606 ] 00:23:13.606 }' 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:13.606 12:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:14.173 12:05:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:14.432 [2024-07-25 12:05:00.301177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:14.432 [2024-07-25 12:05:00.301222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.432 [2024-07-25 12:05:00.301242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x115ab60 00:23:14.432 [2024-07-25 12:05:00.301253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.432 [2024-07-25 12:05:00.301579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.432 [2024-07-25 12:05:00.301595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:14.432 [2024-07-25 12:05:00.301657] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:14.432 [2024-07-25 12:05:00.301669] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:14.432 [2024-07-25 12:05:00.301679] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:14.432 BaseBdev1 00:23:14.432 12:05:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:15.369 12:05:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.369 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.628 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.628 "name": "raid_bdev1", 00:23:15.628 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:15.628 "strip_size_kb": 0, 00:23:15.628 "state": "online", 00:23:15.628 "raid_level": "raid1", 00:23:15.628 "superblock": true, 00:23:15.628 "num_base_bdevs": 2, 00:23:15.628 "num_base_bdevs_discovered": 1, 00:23:15.628 "num_base_bdevs_operational": 1, 00:23:15.628 "base_bdevs_list": [ 00:23:15.628 { 00:23:15.628 "name": null, 00:23:15.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.628 "is_configured": false, 00:23:15.628 "data_offset": 2048, 00:23:15.628 "data_size": 63488 00:23:15.628 }, 00:23:15.628 { 00:23:15.628 "name": "BaseBdev2", 00:23:15.628 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:15.628 "is_configured": true, 00:23:15.628 "data_offset": 2048, 00:23:15.628 "data_size": 63488 00:23:15.628 } 00:23:15.628 ] 00:23:15.628 }' 00:23:15.628 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.628 12:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.194 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.452 12:05:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:16.452 "name": "raid_bdev1", 00:23:16.453 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:16.453 "strip_size_kb": 0, 00:23:16.453 "state": "online", 00:23:16.453 "raid_level": "raid1", 00:23:16.453 "superblock": true, 00:23:16.453 "num_base_bdevs": 2, 00:23:16.453 "num_base_bdevs_discovered": 1, 00:23:16.453 "num_base_bdevs_operational": 1, 00:23:16.453 "base_bdevs_list": [ 00:23:16.453 { 00:23:16.453 "name": null, 00:23:16.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.453 "is_configured": false, 00:23:16.453 "data_offset": 2048, 00:23:16.453 "data_size": 63488 00:23:16.453 }, 00:23:16.453 { 00:23:16.453 "name": "BaseBdev2", 00:23:16.453 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:16.453 "is_configured": true, 00:23:16.453 "data_offset": 2048, 00:23:16.453 "data_size": 63488 00:23:16.453 } 00:23:16.453 ] 00:23:16.453 }' 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:16.453 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.711 [2024-07-25 12:05:02.631638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.711 [2024-07-25 
12:05:02.631750] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:16.711 [2024-07-25 12:05:02.631765] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:16.711 request: 00:23:16.711 { 00:23:16.711 "base_bdev": "BaseBdev1", 00:23:16.711 "raid_bdev": "raid_bdev1", 00:23:16.711 "method": "bdev_raid_add_base_bdev", 00:23:16.711 "req_id": 1 00:23:16.711 } 00:23:16.711 Got JSON-RPC error response 00:23:16.711 response: 00:23:16.711 { 00:23:16.711 "code": -22, 00:23:16.711 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:16.711 } 00:23:16.711 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:23:16.711 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.711 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.711 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.711 12:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.648 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.906 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.906 "name": "raid_bdev1", 00:23:17.906 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:17.906 "strip_size_kb": 0, 00:23:17.906 "state": "online", 00:23:17.906 "raid_level": "raid1", 00:23:17.906 "superblock": true, 00:23:17.906 "num_base_bdevs": 2, 00:23:17.906 "num_base_bdevs_discovered": 1, 00:23:17.906 "num_base_bdevs_operational": 1, 00:23:17.906 "base_bdevs_list": [ 00:23:17.906 { 00:23:17.906 "name": null, 00:23:17.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.906 "is_configured": false, 00:23:17.906 "data_offset": 2048, 00:23:17.906 "data_size": 63488 00:23:17.906 }, 00:23:17.906 { 00:23:17.906 "name": "BaseBdev2", 00:23:17.906 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:17.906 "is_configured": true, 
00:23:17.906 "data_offset": 2048, 00:23:17.906 "data_size": 63488 00:23:17.906 } 00:23:17.906 ] 00:23:17.906 }' 00:23:17.906 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.906 12:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.474 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.733 "name": "raid_bdev1", 00:23:18.733 "uuid": "ed9bc9b9-e568-4e87-9a54-6a3435b16f71", 00:23:18.733 "strip_size_kb": 0, 00:23:18.733 "state": "online", 00:23:18.733 "raid_level": "raid1", 00:23:18.733 "superblock": true, 00:23:18.733 "num_base_bdevs": 2, 00:23:18.733 "num_base_bdevs_discovered": 1, 00:23:18.733 "num_base_bdevs_operational": 1, 00:23:18.733 "base_bdevs_list": [ 00:23:18.733 { 00:23:18.733 "name": null, 00:23:18.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.733 "is_configured": false, 00:23:18.733 "data_offset": 2048, 00:23:18.733 "data_size": 63488 00:23:18.733 }, 00:23:18.733 { 00:23:18.733 "name": "BaseBdev2", 00:23:18.733 "uuid": "981f48b9-052d-558f-bddf-444f181adea3", 00:23:18.733 "is_configured": true, 00:23:18.733 "data_offset": 2048, 00:23:18.733 "data_size": 63488 00:23:18.733 } 00:23:18.733 ] 00:23:18.733 }' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 38917 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 38917 ']' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 38917 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 38917 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 38917' 00:23:18.733 killing process with pid 38917 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 38917 00:23:18.733 Received shutdown signal, test time was about 26.327338 seconds 00:23:18.733 00:23:18.733 Latency(us) 00:23:18.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.733 =================================================================================================================== 00:23:18.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.733 [2024-07-25 12:05:04.828756] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:18.733 [2024-07-25 12:05:04.828847] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.733 [2024-07-25 12:05:04.828889] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.733 [2024-07-25 12:05:04.828900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x115a7e0 name raid_bdev1, state offline 00:23:18.733 12:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 38917 00:23:18.733 [2024-07-25 12:05:04.847739] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:19.017 12:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:23:19.017 00:23:19.017 real 0m30.659s 00:23:19.017 user 0m47.507s 00:23:19.017 sys 0m4.443s 00:23:19.017 12:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.017 12:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.017 ************************************ 00:23:19.017 END TEST raid_rebuild_test_sb_io 00:23:19.017 ************************************ 00:23:19.017 12:05:05 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:23:19.017 12:05:05 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:23:19.017 12:05:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:19.017 12:05:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.017 12:05:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:19.292 ************************************ 00:23:19.292 START TEST raid_rebuild_test 00:23:19.292 ************************************ 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- 
# (( i++ )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:19.292 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=44403 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 44403 /var/tmp/spdk-raid.sock 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 44403 ']' 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.293 12:05:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.293 [2024-07-25 12:05:05.192536] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:23:19.293 [2024-07-25 12:05:05.192593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44403 ] 00:23:19.293 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:19.293 Zero copy mechanism will not be used. 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:19.293 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:19.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.293 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:19.293 [2024-07-25 12:05:05.324608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.552 [2024-07-25 12:05:05.411426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.552 [2024-07-25 12:05:05.474268] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.552 [2024-07-25 12:05:05.474296] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.120 12:05:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.120 12:05:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:23:20.120 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:20.120 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:20.378 BaseBdev1_malloc 00:23:20.378 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:20.637 [2024-07-25 12:05:06.522307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:20.637 [2024-07-25 12:05:06.522353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.637 [2024-07-25 12:05:06.522372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202b5f0 00:23:20.637 [2024-07-25 12:05:06.522383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.637 [2024-07-25 12:05:06.523816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.637 [2024-07-25 12:05:06.523842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:20.637 BaseBdev1 00:23:20.637 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:20.637 12:05:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:20.896 BaseBdev2_malloc 00:23:20.896 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:20.896 [2024-07-25 12:05:06.979770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:20.896 [2024-07-25 12:05:06.979808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.896 [2024-07-25 12:05:06.979824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21cf130 00:23:20.896 [2024-07-25 12:05:06.979835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.896 [2024-07-25 12:05:06.981148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.896 [2024-07-25 12:05:06.981173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:20.896 BaseBdev2 00:23:20.896 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:20.896 12:05:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:21.155 BaseBdev3_malloc 00:23:21.155 12:05:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:21.413 [2024-07-25 12:05:07.433074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:21.413 [2024-07-25 12:05:07.433110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.413 [2024-07-25 12:05:07.433125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21c5420 00:23:21.413 [2024-07-25 12:05:07.433136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.413 [2024-07-25 12:05:07.434391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.413 [2024-07-25 12:05:07.434415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:21.413 BaseBdev3 00:23:21.413 12:05:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:21.413 12:05:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:21.673 BaseBdev4_malloc 00:23:21.673 12:05:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:21.932 [2024-07-25 12:05:07.894441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:21.932 [2024-07-25 12:05:07.894477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.932 [2024-07-25 12:05:07.894493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21c5d40 00:23:21.932 [2024-07-25 12:05:07.894504] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.932 [2024-07-25 12:05:07.895749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.932 [2024-07-25 12:05:07.895773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:21.932 BaseBdev4 00:23:21.932 12:05:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:22.191 spare_malloc 00:23:22.191 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:22.451 spare_delay 00:23:22.451 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:22.710 [2024-07-25 12:05:08.572229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:22.710 [2024-07-25 12:05:08.572263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.710 [2024-07-25 12:05:08.572279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2024db0 00:23:22.710 [2024-07-25 12:05:08.572291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.710 [2024-07-25 12:05:08.573543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.710 [2024-07-25 12:05:08.573568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:22.710 spare 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:22.710 [2024-07-25 12:05:08.796850] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.710 [2024-07-25 12:05:08.797934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.710 [2024-07-25 12:05:08.797981] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:22.710 [2024-07-25 12:05:08.798021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:22.710 [2024-07-25 12:05:08.798098] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x20275b0 00:23:22.710 [2024-07-25 12:05:08.798108] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:22.710 [2024-07-25 12:05:08.798293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x202a380 00:23:22.710 [2024-07-25 12:05:08.798428] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20275b0 00:23:22.710 [2024-07-25 12:05:08.798438] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20275b0 00:23:22.710 [2024-07-25 12:05:08.798535] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.710 12:05:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.969 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.969 "name": "raid_bdev1", 00:23:22.969 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:22.969 "strip_size_kb": 0, 00:23:22.969 "state": "online", 00:23:22.969 "raid_level": "raid1", 00:23:22.969 "superblock": false, 00:23:22.969 "num_base_bdevs": 4, 00:23:22.969 "num_base_bdevs_discovered": 4, 00:23:22.969 "num_base_bdevs_operational": 4, 00:23:22.969 "base_bdevs_list": [ 00:23:22.969 { 00:23:22.969 "name": "BaseBdev1", 00:23:22.969 "uuid": "ec36da05-c40d-5aa7-ac2d-d1fd7ab3e2d6", 00:23:22.969 "is_configured": true, 00:23:22.969 "data_offset": 0, 00:23:22.969 "data_size": 65536 00:23:22.969 }, 00:23:22.969 { 00:23:22.969 "name": "BaseBdev2", 00:23:22.969 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:22.969 "is_configured": true, 00:23:22.969 "data_offset": 0, 00:23:22.969 "data_size": 65536 00:23:22.969 }, 00:23:22.969 { 00:23:22.969 "name": "BaseBdev3", 00:23:22.969 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:22.969 "is_configured": true, 00:23:22.969 "data_offset": 0, 00:23:22.969 "data_size": 65536 00:23:22.969 }, 00:23:22.969 { 00:23:22.969 "name": "BaseBdev4", 00:23:22.969 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:22.969 "is_configured": true, 00:23:22.969 "data_offset": 0, 00:23:22.969 "data_size": 65536 00:23:22.969 } 00:23:22.969 ] 00:23:22.969 }' 00:23:22.969 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.969 12:05:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.536 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:23.537 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:23.796 [2024-07-25 12:05:09.819802] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.796 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:23.796 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.796 12:05:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.055 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:24.314 [2024-07-25 12:05:10.288792] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2027080 00:23:24.314 /dev/nbd0 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:24.314 1+0 records in 00:23:24.314 1+0 records out 00:23:24.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265536 s, 15.4 MB/s 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:24.314 12:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:30.876 65536+0 records in 00:23:30.876 65536+0 records out 00:23:30.876 33554432 bytes (34 MB, 32 MiB) copied, 5.76544 s, 5.8 MB/s 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:30.876 [2024-07-25 12:05:16.369906] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:30.876 [2024-07-25 12:05:16.590528] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:30.876 12:05:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.876 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.876 "name": "raid_bdev1", 00:23:30.876 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:30.876 "strip_size_kb": 0, 00:23:30.876 "state": "online", 00:23:30.876 "raid_level": "raid1", 00:23:30.877 "superblock": false, 00:23:30.877 "num_base_bdevs": 4, 00:23:30.877 "num_base_bdevs_discovered": 3, 00:23:30.877 "num_base_bdevs_operational": 3, 00:23:30.877 "base_bdevs_list": [ 00:23:30.877 { 00:23:30.877 "name": null, 00:23:30.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.877 "is_configured": false, 00:23:30.877 "data_offset": 0, 00:23:30.877 "data_size": 65536 00:23:30.877 }, 00:23:30.877 { 00:23:30.877 "name": "BaseBdev2", 00:23:30.877 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:30.877 "is_configured": true, 00:23:30.877 "data_offset": 0, 00:23:30.877 "data_size": 65536 00:23:30.877 }, 00:23:30.877 { 00:23:30.877 "name": "BaseBdev3", 00:23:30.877 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:30.877 "is_configured": true, 00:23:30.877 "data_offset": 0, 00:23:30.877 "data_size": 65536 00:23:30.877 }, 00:23:30.877 { 00:23:30.877 "name": "BaseBdev4", 00:23:30.877 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:30.877 "is_configured": true, 00:23:30.877 "data_offset": 0, 00:23:30.877 "data_size": 65536 00:23:30.877 } 00:23:30.877 ] 00:23:30.877 }' 00:23:30.877 12:05:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.877 12:05:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.443 12:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.702 [2024-07-25 12:05:17.605212] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.702 [2024-07-25 12:05:17.609117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x202a4a0 00:23:31.702 [2024-07-25 12:05:17.611195] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.702 12:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.639 12:05:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.639 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.902 "name": "raid_bdev1", 00:23:32.902 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:32.902 "strip_size_kb": 0, 00:23:32.902 "state": "online", 00:23:32.902 "raid_level": "raid1", 00:23:32.902 "superblock": false, 00:23:32.902 "num_base_bdevs": 4, 00:23:32.902 "num_base_bdevs_discovered": 4, 00:23:32.902 "num_base_bdevs_operational": 4, 00:23:32.902 "process": { 00:23:32.902 "type": "rebuild", 00:23:32.902 "target": "spare", 00:23:32.902 "progress": { 00:23:32.902 "blocks": 24576, 00:23:32.902 "percent": 37 00:23:32.902 } 00:23:32.902 }, 00:23:32.902 "base_bdevs_list": [ 00:23:32.902 { 00:23:32.902 "name": "spare", 00:23:32.902 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:32.902 "is_configured": true, 00:23:32.902 "data_offset": 0, 00:23:32.902 "data_size": 65536 00:23:32.902 }, 00:23:32.902 { 00:23:32.902 "name": "BaseBdev2", 00:23:32.902 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:32.902 "is_configured": true, 00:23:32.902 "data_offset": 0, 00:23:32.902 "data_size": 65536 00:23:32.902 }, 00:23:32.902 { 00:23:32.902 "name": "BaseBdev3", 00:23:32.902 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:32.902 "is_configured": true, 00:23:32.902 "data_offset": 0, 00:23:32.902 "data_size": 65536 00:23:32.902 }, 00:23:32.902 { 00:23:32.902 "name": "BaseBdev4", 00:23:32.902 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:32.902 "is_configured": true, 00:23:32.902 "data_offset": 0, 00:23:32.902 "data_size": 65536 00:23:32.902 } 00:23:32.902 ] 00:23:32.902 }' 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.902 12:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:33.165 [2024-07-25 12:05:19.152204] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.165 [2024-07-25 12:05:19.222953] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.165 [2024-07-25 12:05:19.222994] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.165 [2024-07-25 12:05:19.223010] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.165 [2024-07-25 12:05:19.223018] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.165 
12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.165 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.424 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.424 "name": "raid_bdev1", 00:23:33.424 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:33.424 "strip_size_kb": 0, 00:23:33.424 "state": "online", 00:23:33.424 "raid_level": "raid1", 00:23:33.424 "superblock": false, 00:23:33.424 "num_base_bdevs": 4, 00:23:33.424 "num_base_bdevs_discovered": 3, 00:23:33.424 "num_base_bdevs_operational": 3, 00:23:33.424 "base_bdevs_list": [ 00:23:33.424 { 00:23:33.424 "name": null, 00:23:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.424 "is_configured": false, 00:23:33.424 "data_offset": 0, 00:23:33.424 "data_size": 65536 00:23:33.424 }, 00:23:33.424 { 00:23:33.424 "name": "BaseBdev2", 00:23:33.424 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:33.424 "is_configured": true, 00:23:33.424 "data_offset": 0, 00:23:33.424 "data_size": 65536 00:23:33.424 }, 00:23:33.424 { 00:23:33.424 "name": "BaseBdev3", 00:23:33.424 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:33.424 "is_configured": true, 00:23:33.424 "data_offset": 0, 00:23:33.424 "data_size": 65536 00:23:33.424 }, 00:23:33.424 { 00:23:33.424 "name": "BaseBdev4", 00:23:33.424 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:33.424 "is_configured": true, 00:23:33.424 "data_offset": 0, 00:23:33.424 "data_size": 65536 00:23:33.424 } 00:23:33.424 ] 00:23:33.424 }' 00:23:33.424 12:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.424 12:05:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.992 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.251 12:05:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.251 "name": "raid_bdev1", 00:23:34.251 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:34.251 "strip_size_kb": 0, 00:23:34.251 "state": "online", 00:23:34.251 "raid_level": "raid1", 00:23:34.251 "superblock": false, 00:23:34.251 "num_base_bdevs": 4, 00:23:34.251 "num_base_bdevs_discovered": 3, 00:23:34.251 "num_base_bdevs_operational": 3, 00:23:34.251 "base_bdevs_list": [ 00:23:34.251 { 00:23:34.251 "name": null, 00:23:34.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.251 "is_configured": false, 00:23:34.251 "data_offset": 0, 00:23:34.251 "data_size": 65536 00:23:34.251 }, 00:23:34.251 { 00:23:34.251 "name": "BaseBdev2", 00:23:34.251 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:34.251 "is_configured": true, 00:23:34.251 "data_offset": 0, 00:23:34.251 "data_size": 65536 00:23:34.251 }, 00:23:34.251 { 00:23:34.251 "name": "BaseBdev3", 00:23:34.251 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:34.251 "is_configured": true, 00:23:34.251 "data_offset": 0, 00:23:34.251 "data_size": 65536 00:23:34.251 }, 00:23:34.251 { 00:23:34.251 "name": "BaseBdev4", 00:23:34.251 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:34.251 "is_configured": true, 00:23:34.251 "data_offset": 0, 00:23:34.251 "data_size": 65536 00:23:34.251 } 00:23:34.251 ] 00:23:34.251 }' 00:23:34.251 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:34.251 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:34.251 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:34.509 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:34.509 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:34.509 [2024-07-25 12:05:20.578541] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.509 [2024-07-25 12:05:20.582408] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2023490 00:23:34.509 [2024-07-25 12:05:20.583802] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:34.509 12:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.886 "name": "raid_bdev1", 00:23:35.886 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:35.886 
"strip_size_kb": 0, 00:23:35.886 "state": "online", 00:23:35.886 "raid_level": "raid1", 00:23:35.886 "superblock": false, 00:23:35.886 "num_base_bdevs": 4, 00:23:35.886 "num_base_bdevs_discovered": 4, 00:23:35.886 "num_base_bdevs_operational": 4, 00:23:35.886 "process": { 00:23:35.886 "type": "rebuild", 00:23:35.886 "target": "spare", 00:23:35.886 "progress": { 00:23:35.886 "blocks": 24576, 00:23:35.886 "percent": 37 00:23:35.886 } 00:23:35.886 }, 00:23:35.886 "base_bdevs_list": [ 00:23:35.886 { 00:23:35.886 "name": "spare", 00:23:35.886 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:35.886 "is_configured": true, 00:23:35.886 "data_offset": 0, 00:23:35.886 "data_size": 65536 00:23:35.886 }, 00:23:35.886 { 00:23:35.886 "name": "BaseBdev2", 00:23:35.886 "uuid": "c0d938ff-7cca-5634-994c-54970d74bda2", 00:23:35.886 "is_configured": true, 00:23:35.886 "data_offset": 0, 00:23:35.886 "data_size": 65536 00:23:35.886 }, 00:23:35.886 { 00:23:35.886 "name": "BaseBdev3", 00:23:35.886 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:35.886 "is_configured": true, 00:23:35.886 "data_offset": 0, 00:23:35.886 "data_size": 65536 00:23:35.886 }, 00:23:35.886 { 00:23:35.886 "name": "BaseBdev4", 00:23:35.886 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:35.886 "is_configured": true, 00:23:35.886 "data_offset": 0, 00:23:35.886 "data_size": 65536 00:23:35.886 } 00:23:35.886 ] 00:23:35.886 }' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:23:35.886 12:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:36.145 [2024-07-25 12:05:22.136773] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:36.145 [2024-07-25 12:05:22.195460] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2023490 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.145 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.434 "name": "raid_bdev1", 00:23:36.434 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:36.434 "strip_size_kb": 0, 00:23:36.434 "state": "online", 00:23:36.434 "raid_level": "raid1", 00:23:36.434 "superblock": false, 00:23:36.434 "num_base_bdevs": 4, 00:23:36.434 "num_base_bdevs_discovered": 3, 00:23:36.434 "num_base_bdevs_operational": 3, 00:23:36.434 "process": { 00:23:36.434 "type": "rebuild", 00:23:36.434 "target": "spare", 00:23:36.434 "progress": { 00:23:36.434 "blocks": 36864, 00:23:36.434 "percent": 56 00:23:36.434 } 00:23:36.434 }, 00:23:36.434 "base_bdevs_list": [ 00:23:36.434 { 00:23:36.434 "name": "spare", 00:23:36.434 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": null, 00:23:36.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.434 "is_configured": false, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": "BaseBdev3", 00:23:36.434 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": "BaseBdev4", 00:23:36.434 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 } 00:23:36.434 ] 00:23:36.434 }' 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=827 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.434 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.694 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.694 "name": "raid_bdev1", 00:23:36.694 "uuid": 
"dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:36.694 "strip_size_kb": 0, 00:23:36.694 "state": "online", 00:23:36.694 "raid_level": "raid1", 00:23:36.694 "superblock": false, 00:23:36.694 "num_base_bdevs": 4, 00:23:36.694 "num_base_bdevs_discovered": 3, 00:23:36.694 "num_base_bdevs_operational": 3, 00:23:36.694 "process": { 00:23:36.694 "type": "rebuild", 00:23:36.694 "target": "spare", 00:23:36.694 "progress": { 00:23:36.694 "blocks": 43008, 00:23:36.694 "percent": 65 00:23:36.694 } 00:23:36.694 }, 00:23:36.694 "base_bdevs_list": [ 00:23:36.694 { 00:23:36.694 "name": "spare", 00:23:36.694 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:36.694 "is_configured": true, 00:23:36.694 "data_offset": 0, 00:23:36.694 "data_size": 65536 00:23:36.694 }, 00:23:36.694 { 00:23:36.694 "name": null, 00:23:36.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.694 "is_configured": false, 00:23:36.694 "data_offset": 0, 00:23:36.694 "data_size": 65536 00:23:36.694 }, 00:23:36.694 { 00:23:36.694 "name": "BaseBdev3", 00:23:36.694 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:36.694 "is_configured": true, 00:23:36.694 "data_offset": 0, 00:23:36.694 "data_size": 65536 00:23:36.694 }, 00:23:36.694 { 00:23:36.694 "name": "BaseBdev4", 00:23:36.694 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:36.694 "is_configured": true, 00:23:36.694 "data_offset": 0, 00:23:36.694 "data_size": 65536 00:23:36.694 } 00:23:36.694 ] 00:23:36.694 }' 00:23:36.694 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.694 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.694 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.953 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.953 12:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:37.888 [2024-07-25 12:05:23.807143] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:37.888 [2024-07-25 12:05:23.807198] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:37.888 [2024-07-25 12:05:23.807236] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.888 12:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.148 "name": "raid_bdev1", 00:23:38.148 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:38.148 
"strip_size_kb": 0, 00:23:38.148 "state": "online", 00:23:38.148 "raid_level": "raid1", 00:23:38.148 "superblock": false, 00:23:38.148 "num_base_bdevs": 4, 00:23:38.148 "num_base_bdevs_discovered": 3, 00:23:38.148 "num_base_bdevs_operational": 3, 00:23:38.148 "base_bdevs_list": [ 00:23:38.148 { 00:23:38.148 "name": "spare", 00:23:38.148 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:38.148 "is_configured": true, 00:23:38.148 "data_offset": 0, 00:23:38.148 "data_size": 65536 00:23:38.148 }, 00:23:38.148 { 00:23:38.148 "name": null, 00:23:38.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.148 "is_configured": false, 00:23:38.148 "data_offset": 0, 00:23:38.148 "data_size": 65536 00:23:38.148 }, 00:23:38.148 { 00:23:38.148 "name": "BaseBdev3", 00:23:38.148 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:38.148 "is_configured": true, 00:23:38.148 "data_offset": 0, 00:23:38.148 "data_size": 65536 00:23:38.148 }, 00:23:38.148 { 00:23:38.148 "name": "BaseBdev4", 00:23:38.148 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:38.148 "is_configured": true, 00:23:38.148 "data_offset": 0, 00:23:38.148 "data_size": 65536 00:23:38.148 } 00:23:38.148 ] 00:23:38.148 }' 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.148 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.407 "name": "raid_bdev1", 00:23:38.407 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:38.407 "strip_size_kb": 0, 00:23:38.407 "state": "online", 00:23:38.407 "raid_level": "raid1", 00:23:38.407 "superblock": false, 00:23:38.407 "num_base_bdevs": 4, 00:23:38.407 "num_base_bdevs_discovered": 3, 00:23:38.407 "num_base_bdevs_operational": 3, 00:23:38.407 "base_bdevs_list": [ 00:23:38.407 { 00:23:38.407 "name": "spare", 00:23:38.407 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:38.407 "is_configured": true, 00:23:38.407 "data_offset": 0, 00:23:38.407 "data_size": 65536 00:23:38.407 }, 00:23:38.407 { 00:23:38.407 "name": null, 00:23:38.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.407 "is_configured": false, 00:23:38.407 "data_offset": 0, 00:23:38.407 "data_size": 65536 00:23:38.407 }, 00:23:38.407 { 00:23:38.407 "name": 
"BaseBdev3", 00:23:38.407 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:38.407 "is_configured": true, 00:23:38.407 "data_offset": 0, 00:23:38.407 "data_size": 65536 00:23:38.407 }, 00:23:38.407 { 00:23:38.407 "name": "BaseBdev4", 00:23:38.407 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:38.407 "is_configured": true, 00:23:38.407 "data_offset": 0, 00:23:38.407 "data_size": 65536 00:23:38.407 } 00:23:38.407 ] 00:23:38.407 }' 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.407 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.666 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.666 "name": "raid_bdev1", 00:23:38.666 "uuid": "dbe705ff-fb6e-43ca-a2f1-0584db90b9c6", 00:23:38.666 "strip_size_kb": 0, 00:23:38.666 "state": "online", 00:23:38.666 "raid_level": "raid1", 00:23:38.666 "superblock": false, 00:23:38.666 "num_base_bdevs": 4, 00:23:38.666 "num_base_bdevs_discovered": 3, 00:23:38.666 "num_base_bdevs_operational": 3, 00:23:38.666 "base_bdevs_list": [ 00:23:38.666 { 00:23:38.666 "name": "spare", 00:23:38.666 "uuid": "78edd0f4-ebd3-5296-99db-e3cef155596d", 00:23:38.666 "is_configured": true, 00:23:38.666 "data_offset": 0, 00:23:38.666 "data_size": 65536 00:23:38.666 }, 00:23:38.666 { 00:23:38.666 "name": null, 00:23:38.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.666 "is_configured": false, 00:23:38.666 "data_offset": 0, 00:23:38.666 "data_size": 65536 00:23:38.666 }, 00:23:38.666 { 00:23:38.666 "name": "BaseBdev3", 00:23:38.666 "uuid": "1a7c5a37-292d-587f-aa49-4c52713fd213", 00:23:38.666 "is_configured": true, 00:23:38.666 "data_offset": 0, 00:23:38.666 "data_size": 65536 00:23:38.666 }, 00:23:38.666 { 00:23:38.666 "name": "BaseBdev4", 00:23:38.666 "uuid": "b3871d7c-b979-5206-a4e4-428f9ef1aebe", 00:23:38.666 
"is_configured": true, 00:23:38.666 "data_offset": 0, 00:23:38.666 "data_size": 65536 00:23:38.666 } 00:23:38.666 ] 00:23:38.666 }' 00:23:38.666 12:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.666 12:05:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.234 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:39.493 [2024-07-25 12:05:25.419467] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.493 [2024-07-25 12:05:25.419491] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.493 [2024-07-25 12:05:25.419545] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.493 [2024-07-25 12:05:25.419610] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.493 [2024-07-25 12:05:25.419621] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20275b0 name raid_bdev1, state offline 00:23:39.493 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.493 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:39.752 /dev/nbd0 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( 
i <= 20 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:39.752 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.753 1+0 records in 00:23:39.753 1+0 records out 00:23:39.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236612 s, 17.3 MB/s 00:23:39.753 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.011 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:40.012 12:05:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:40.012 /dev/nbd1 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.012 1+0 records in 00:23:40.012 1+0 records out 00:23:40.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307035 s, 13.3 MB/s 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:40.012 12:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.271 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 44403 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- 
# '[' -z 44403 ']' 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 44403 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 44403 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 44403' 00:23:40.531 killing process with pid 44403 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 44403 00:23:40.531 Received shutdown signal, test time was about 60.000000 seconds 00:23:40.531 00:23:40.531 Latency(us) 00:23:40.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.531 =================================================================================================================== 00:23:40.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.531 [2024-07-25 12:05:26.625274] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.531 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 44403 00:23:40.790 [2024-07-25 12:05:26.664306] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.790 12:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:23:40.790 00:23:40.790 real 0m21.732s 00:23:40.790 user 0m30.084s 00:23:40.790 sys 0m4.422s 00:23:40.790 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:40.790 12:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 ************************************ 00:23:40.790 END TEST raid_rebuild_test 00:23:40.790 ************************************ 00:23:40.790 12:05:26 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:23:40.790 12:05:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:40.790 12:05:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:40.790 12:05:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 ************************************ 00:23:41.049 START TEST raid_rebuild_test_sb 00:23:41.049 ************************************ 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=48335 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 48335 /var/tmp/spdk-raid.sock 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 48335 ']' 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:41.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
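The trace above and below repeatedly runs the same verification pattern: dump the raid bdevs over the test's RPC socket, select the bdev under test with jq, and compare the rebuild process type and target against expected values, falling back to "none" once the process object disappears from the output. The lines below are a minimal standalone sketch of that pattern, reconstructed only from the rpc.py invocations and jq filters visible in this log; it is illustrative, not the verbatim bdev_raid.sh helper, and it assumes the same workspace path and /var/tmp/spdk-raid.sock socket used by this run.

#!/usr/bin/env bash
# Sketch of the jq-based raid rebuild check seen throughout this trace.
# Assumptions: SPDK checked out at the workspace path below and bdevperf
# already listening on /var/tmp/spdk-raid.sock, as in this test run.
rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_bdev_process() {
    local raid_bdev_name=$1   # e.g. raid_bdev1
    local process_type=$2     # expected .process.type, e.g. rebuild or none
    local target=$3           # expected .process.target, e.g. spare or none
    local raid_bdev_info

    # Dump all raid bdevs and keep only the one under test.
    raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # The '// "none"' fallback keeps the check valid after the rebuild
    # finishes, when the .process object is no longer reported.
    [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] || return 1
    [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]] || return 1
}

# Example: assert that raid_bdev1 is rebuilding onto the spare base bdev.
verify_raid_bdev_process raid_bdev1 rebuild spare

In the log this check is polled inside a loop guarded by a timeout ("local timeout=827", "(( SECONDS < timeout ))") with a one-second sleep between samples, until the reported process type drops back to none and the final array state is checked with verify_raid_bdev_state.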
00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 12:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:41.049 [2024-07-25 12:05:26.997005] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:23:41.049 [2024-07-25 12:05:26.997062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48335 ] 00:23:41.049 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:41.049 Zero copy mechanism will not be used. 00:23:41.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.049 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:41.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.049 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:41.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.049 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:41.050 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.050 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:41.050 [2024-07-25 12:05:27.128626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.309 [2024-07-25 12:05:27.214666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.309 [2024-07-25 12:05:27.275438] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.309 [2024-07-25 12:05:27.275474] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.875 12:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.875 12:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:41.875 12:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:41.876 12:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:42.134 BaseBdev1_malloc 00:23:42.134 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:42.134 [2024-07-25 12:05:28.251980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:42.134 [2024-07-25 12:05:28.252020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.134 [2024-07-25 12:05:28.252040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x1c425f0 00:23:42.134 [2024-07-25 12:05:28.252051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.392 [2024-07-25 12:05:28.253657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.392 [2024-07-25 12:05:28.253685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:42.392 BaseBdev1 00:23:42.392 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:42.392 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:42.392 BaseBdev2_malloc 00:23:42.392 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:42.650 [2024-07-25 12:05:28.625501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:42.650 [2024-07-25 12:05:28.625541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.650 [2024-07-25 12:05:28.625558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1de6130 00:23:42.650 [2024-07-25 12:05:28.625569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.650 [2024-07-25 12:05:28.626984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.650 [2024-07-25 12:05:28.627010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:42.650 BaseBdev2 00:23:42.650 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:42.650 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:42.909 BaseBdev3_malloc 00:23:42.909 12:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:42.909 [2024-07-25 12:05:28.995064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:42.909 [2024-07-25 12:05:28.995105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.909 [2024-07-25 12:05:28.995123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ddc420 00:23:42.909 [2024-07-25 12:05:28.995134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.909 [2024-07-25 12:05:28.996480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.909 [2024-07-25 12:05:28.996506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:42.909 BaseBdev3 00:23:42.909 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:42.909 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:43.167 BaseBdev4_malloc 00:23:43.167 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:43.425 [2024-07-25 12:05:29.368256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:43.425 [2024-07-25 12:05:29.368298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.425 [2024-07-25 12:05:29.368315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ddcd40 00:23:43.425 [2024-07-25 12:05:29.368326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.425 [2024-07-25 12:05:29.369676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.425 [2024-07-25 12:05:29.369701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:43.425 BaseBdev4 00:23:43.425 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:43.425 spare_malloc 00:23:43.425 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:43.684 spare_delay 00:23:43.684 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:43.943 [2024-07-25 12:05:29.946004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.943 [2024-07-25 12:05:29.946044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.943 [2024-07-25 12:05:29.946063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c3bdb0 00:23:43.943 [2024-07-25 12:05:29.946074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.943 [2024-07-25 12:05:29.947474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.943 [2024-07-25 12:05:29.947500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.943 spare 00:23:43.943 12:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:44.510 [2024-07-25 12:05:30.447351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:44.510 [2024-07-25 12:05:30.448633] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:44.510 [2024-07-25 12:05:30.448685] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:44.510 [2024-07-25 12:05:30.448728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:44.510 [2024-07-25 12:05:30.448911] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c3e5b0 00:23:44.510 [2024-07-25 12:05:30.448922] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:44.510 [2024-07-25 12:05:30.449108] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3e580 00:23:44.510 [2024-07-25 12:05:30.449259] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c3e5b0 00:23:44.510 [2024-07-25 12:05:30.449269] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c3e5b0 00:23:44.510 [2024-07-25 12:05:30.449364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.510 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.768 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.768 "name": "raid_bdev1", 00:23:44.768 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:44.768 "strip_size_kb": 0, 00:23:44.768 "state": "online", 00:23:44.768 "raid_level": "raid1", 00:23:44.768 "superblock": true, 00:23:44.768 "num_base_bdevs": 4, 00:23:44.768 "num_base_bdevs_discovered": 4, 00:23:44.768 "num_base_bdevs_operational": 4, 00:23:44.768 "base_bdevs_list": [ 00:23:44.768 { 00:23:44.768 "name": "BaseBdev1", 00:23:44.768 "uuid": "75b24578-02c2-564a-bd96-d361bd6f8454", 00:23:44.768 "is_configured": true, 00:23:44.768 "data_offset": 2048, 00:23:44.768 "data_size": 63488 00:23:44.768 }, 00:23:44.768 { 00:23:44.768 "name": "BaseBdev2", 00:23:44.768 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:44.768 "is_configured": true, 00:23:44.768 "data_offset": 2048, 00:23:44.768 "data_size": 63488 00:23:44.768 }, 00:23:44.768 { 00:23:44.768 "name": "BaseBdev3", 00:23:44.768 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:44.768 "is_configured": true, 00:23:44.768 "data_offset": 2048, 00:23:44.768 "data_size": 63488 00:23:44.768 }, 00:23:44.768 { 00:23:44.768 "name": "BaseBdev4", 00:23:44.768 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:44.768 "is_configured": true, 00:23:44.768 "data_offset": 2048, 00:23:44.768 "data_size": 63488 00:23:44.768 } 00:23:44.768 ] 00:23:44.768 }' 00:23:44.769 12:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.769 12:05:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.336 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:45.336 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:45.595 [2024-07-25 12:05:31.474290] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.595 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:45.595 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.595 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:45.854 [2024-07-25 12:05:31.927236] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ddbc60 00:23:45.854 /dev/nbd0 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:45.854 12:05:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:46.113 1+0 records in 00:23:46.113 1+0 records out 00:23:46.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217115 s, 18.9 MB/s 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:46.113 12:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:52.674 63488+0 records in 00:23:52.674 63488+0 records out 00:23:52.674 32505856 bytes (33 MB, 31 MiB) copied, 5.66646 s, 5.7 MB/s 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:52.674 [2024-07-25 12:05:37.905812] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.674 12:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev1 00:23:52.674 [2024-07-25 12:05:38.130520] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:52.674 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:52.674 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.674 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.675 "name": "raid_bdev1", 00:23:52.675 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:52.675 "strip_size_kb": 0, 00:23:52.675 "state": "online", 00:23:52.675 "raid_level": "raid1", 00:23:52.675 "superblock": true, 00:23:52.675 "num_base_bdevs": 4, 00:23:52.675 "num_base_bdevs_discovered": 3, 00:23:52.675 "num_base_bdevs_operational": 3, 00:23:52.675 "base_bdevs_list": [ 00:23:52.675 { 00:23:52.675 "name": null, 00:23:52.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.675 "is_configured": false, 00:23:52.675 "data_offset": 2048, 00:23:52.675 "data_size": 63488 00:23:52.675 }, 00:23:52.675 { 00:23:52.675 "name": "BaseBdev2", 00:23:52.675 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:52.675 "is_configured": true, 00:23:52.675 "data_offset": 2048, 00:23:52.675 "data_size": 63488 00:23:52.675 }, 00:23:52.675 { 00:23:52.675 "name": "BaseBdev3", 00:23:52.675 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:52.675 "is_configured": true, 00:23:52.675 "data_offset": 2048, 00:23:52.675 "data_size": 63488 00:23:52.675 }, 00:23:52.675 { 00:23:52.675 "name": "BaseBdev4", 00:23:52.675 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:52.675 "is_configured": true, 00:23:52.675 "data_offset": 2048, 00:23:52.675 "data_size": 63488 00:23:52.675 } 00:23:52.675 ] 00:23:52.675 }' 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.675 12:05:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.934 12:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:53.192 [2024-07-25 12:05:39.153609] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.192 [2024-07-25 
12:05:39.157475] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3e580 00:23:53.192 [2024-07-25 12:05:39.159543] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.192 12:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.128 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.387 "name": "raid_bdev1", 00:23:54.387 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:54.387 "strip_size_kb": 0, 00:23:54.387 "state": "online", 00:23:54.387 "raid_level": "raid1", 00:23:54.387 "superblock": true, 00:23:54.387 "num_base_bdevs": 4, 00:23:54.387 "num_base_bdevs_discovered": 4, 00:23:54.387 "num_base_bdevs_operational": 4, 00:23:54.387 "process": { 00:23:54.387 "type": "rebuild", 00:23:54.387 "target": "spare", 00:23:54.387 "progress": { 00:23:54.387 "blocks": 24576, 00:23:54.387 "percent": 38 00:23:54.387 } 00:23:54.387 }, 00:23:54.387 "base_bdevs_list": [ 00:23:54.387 { 00:23:54.387 "name": "spare", 00:23:54.387 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:54.387 "is_configured": true, 00:23:54.387 "data_offset": 2048, 00:23:54.387 "data_size": 63488 00:23:54.387 }, 00:23:54.387 { 00:23:54.387 "name": "BaseBdev2", 00:23:54.387 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:54.387 "is_configured": true, 00:23:54.387 "data_offset": 2048, 00:23:54.387 "data_size": 63488 00:23:54.387 }, 00:23:54.387 { 00:23:54.387 "name": "BaseBdev3", 00:23:54.387 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:54.387 "is_configured": true, 00:23:54.387 "data_offset": 2048, 00:23:54.387 "data_size": 63488 00:23:54.387 }, 00:23:54.387 { 00:23:54.387 "name": "BaseBdev4", 00:23:54.387 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:54.387 "is_configured": true, 00:23:54.387 "data_offset": 2048, 00:23:54.387 "data_size": 63488 00:23:54.387 } 00:23:54.387 ] 00:23:54.387 }' 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.387 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:54.646 [2024-07-25 12:05:40.708545] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:54.905 [2024-07-25 12:05:40.771284] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:54.905 [2024-07-25 12:05:40.771325] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.905 [2024-07-25 12:05:40.771340] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:54.905 [2024-07-25 12:05:40.771348] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.905 12:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.194 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.194 "name": "raid_bdev1", 00:23:55.194 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:55.194 "strip_size_kb": 0, 00:23:55.194 "state": "online", 00:23:55.194 "raid_level": "raid1", 00:23:55.194 "superblock": true, 00:23:55.194 "num_base_bdevs": 4, 00:23:55.194 "num_base_bdevs_discovered": 3, 00:23:55.194 "num_base_bdevs_operational": 3, 00:23:55.194 "base_bdevs_list": [ 00:23:55.194 { 00:23:55.194 "name": null, 00:23:55.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.194 "is_configured": false, 00:23:55.194 "data_offset": 2048, 00:23:55.194 "data_size": 63488 00:23:55.194 }, 00:23:55.194 { 00:23:55.194 "name": "BaseBdev2", 00:23:55.194 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:55.194 "is_configured": true, 00:23:55.194 "data_offset": 2048, 00:23:55.194 "data_size": 63488 00:23:55.194 }, 00:23:55.194 { 00:23:55.194 "name": "BaseBdev3", 00:23:55.194 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:55.194 "is_configured": true, 00:23:55.194 "data_offset": 2048, 00:23:55.194 "data_size": 63488 00:23:55.194 }, 00:23:55.194 { 00:23:55.194 "name": "BaseBdev4", 00:23:55.194 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:55.194 "is_configured": true, 00:23:55.194 "data_offset": 2048, 00:23:55.194 "data_size": 63488 00:23:55.194 } 00:23:55.194 ] 00:23:55.194 }' 00:23:55.194 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.194 
12:05:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.761 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:55.762 "name": "raid_bdev1", 00:23:55.762 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:55.762 "strip_size_kb": 0, 00:23:55.762 "state": "online", 00:23:55.762 "raid_level": "raid1", 00:23:55.762 "superblock": true, 00:23:55.762 "num_base_bdevs": 4, 00:23:55.762 "num_base_bdevs_discovered": 3, 00:23:55.762 "num_base_bdevs_operational": 3, 00:23:55.762 "base_bdevs_list": [ 00:23:55.762 { 00:23:55.762 "name": null, 00:23:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.762 "is_configured": false, 00:23:55.762 "data_offset": 2048, 00:23:55.762 "data_size": 63488 00:23:55.762 }, 00:23:55.762 { 00:23:55.762 "name": "BaseBdev2", 00:23:55.762 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:55.762 "is_configured": true, 00:23:55.762 "data_offset": 2048, 00:23:55.762 "data_size": 63488 00:23:55.762 }, 00:23:55.762 { 00:23:55.762 "name": "BaseBdev3", 00:23:55.762 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:55.762 "is_configured": true, 00:23:55.762 "data_offset": 2048, 00:23:55.762 "data_size": 63488 00:23:55.762 }, 00:23:55.762 { 00:23:55.762 "name": "BaseBdev4", 00:23:55.762 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:55.762 "is_configured": true, 00:23:55.762 "data_offset": 2048, 00:23:55.762 "data_size": 63488 00:23:55.762 } 00:23:55.762 ] 00:23:55.762 }' 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:55.762 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.021 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.021 12:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:56.021 [2024-07-25 12:05:42.122454] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:56.021 [2024-07-25 12:05:42.126358] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3e580 00:23:56.021 [2024-07-25 12:05:42.127752] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:56.279 12:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:57.215 12:05:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.215 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.473 "name": "raid_bdev1", 00:23:57.473 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:57.473 "strip_size_kb": 0, 00:23:57.473 "state": "online", 00:23:57.473 "raid_level": "raid1", 00:23:57.473 "superblock": true, 00:23:57.473 "num_base_bdevs": 4, 00:23:57.473 "num_base_bdevs_discovered": 4, 00:23:57.473 "num_base_bdevs_operational": 4, 00:23:57.473 "process": { 00:23:57.473 "type": "rebuild", 00:23:57.473 "target": "spare", 00:23:57.473 "progress": { 00:23:57.473 "blocks": 24576, 00:23:57.473 "percent": 38 00:23:57.473 } 00:23:57.473 }, 00:23:57.473 "base_bdevs_list": [ 00:23:57.473 { 00:23:57.473 "name": "spare", 00:23:57.473 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:57.473 "is_configured": true, 00:23:57.473 "data_offset": 2048, 00:23:57.473 "data_size": 63488 00:23:57.473 }, 00:23:57.473 { 00:23:57.473 "name": "BaseBdev2", 00:23:57.473 "uuid": "add47503-51a4-5f69-a0ee-c44e7029a071", 00:23:57.473 "is_configured": true, 00:23:57.473 "data_offset": 2048, 00:23:57.473 "data_size": 63488 00:23:57.473 }, 00:23:57.473 { 00:23:57.473 "name": "BaseBdev3", 00:23:57.473 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:57.473 "is_configured": true, 00:23:57.473 "data_offset": 2048, 00:23:57.473 "data_size": 63488 00:23:57.473 }, 00:23:57.473 { 00:23:57.473 "name": "BaseBdev4", 00:23:57.473 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:57.473 "is_configured": true, 00:23:57.473 "data_offset": 2048, 00:23:57.473 "data_size": 63488 00:23:57.473 } 00:23:57.473 ] 00:23:57.473 }' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:57.473 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:57.473 12:05:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:23:57.473 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:57.731 [2024-07-25 12:05:43.680792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:57.731 [2024-07-25 12:05:43.839760] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1c3e580 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.989 12:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.989 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.989 "name": "raid_bdev1", 00:23:57.989 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:57.989 "strip_size_kb": 0, 00:23:57.989 "state": "online", 00:23:57.989 "raid_level": "raid1", 00:23:57.989 "superblock": true, 00:23:57.989 "num_base_bdevs": 4, 00:23:57.989 "num_base_bdevs_discovered": 3, 00:23:57.989 "num_base_bdevs_operational": 3, 00:23:57.989 "process": { 00:23:57.989 "type": "rebuild", 00:23:57.989 "target": "spare", 00:23:57.989 "progress": { 00:23:57.989 "blocks": 36864, 00:23:57.989 "percent": 58 00:23:57.989 } 00:23:57.989 }, 00:23:57.989 "base_bdevs_list": [ 00:23:57.989 { 00:23:57.989 "name": "spare", 00:23:57.989 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:57.989 "is_configured": true, 00:23:57.989 "data_offset": 2048, 00:23:57.989 "data_size": 63488 00:23:57.989 }, 00:23:57.989 { 00:23:57.989 "name": null, 00:23:57.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.989 "is_configured": false, 00:23:57.989 "data_offset": 2048, 00:23:57.989 "data_size": 63488 00:23:57.989 }, 00:23:57.989 { 00:23:57.989 "name": "BaseBdev3", 00:23:57.989 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:57.989 "is_configured": true, 00:23:57.989 "data_offset": 2048, 00:23:57.989 "data_size": 63488 00:23:57.989 }, 00:23:57.989 { 00:23:57.989 "name": "BaseBdev4", 00:23:57.989 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:57.989 "is_configured": true, 00:23:57.989 "data_offset": 2048, 00:23:57.989 "data_size": 63488 00:23:57.989 } 00:23:57.989 ] 00:23:57.989 }' 00:23:57.989 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=849 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:58.247 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.248 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.507 "name": "raid_bdev1", 00:23:58.507 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:58.507 "strip_size_kb": 0, 00:23:58.507 "state": "online", 00:23:58.507 "raid_level": "raid1", 00:23:58.507 "superblock": true, 00:23:58.507 "num_base_bdevs": 4, 00:23:58.507 "num_base_bdevs_discovered": 3, 00:23:58.507 "num_base_bdevs_operational": 3, 00:23:58.507 "process": { 00:23:58.507 "type": "rebuild", 00:23:58.507 "target": "spare", 00:23:58.507 "progress": { 00:23:58.507 "blocks": 43008, 00:23:58.507 "percent": 67 00:23:58.507 } 00:23:58.507 }, 00:23:58.507 "base_bdevs_list": [ 00:23:58.507 { 00:23:58.507 "name": "spare", 00:23:58.507 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:58.507 "is_configured": true, 00:23:58.507 "data_offset": 2048, 00:23:58.507 "data_size": 63488 00:23:58.507 }, 00:23:58.507 { 00:23:58.507 "name": null, 00:23:58.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.507 "is_configured": false, 00:23:58.507 "data_offset": 2048, 00:23:58.507 "data_size": 63488 00:23:58.507 }, 00:23:58.507 { 00:23:58.507 "name": "BaseBdev3", 00:23:58.507 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:58.507 "is_configured": true, 00:23:58.507 "data_offset": 2048, 00:23:58.507 "data_size": 63488 00:23:58.507 }, 00:23:58.507 { 00:23:58.507 "name": "BaseBdev4", 00:23:58.507 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:58.507 "is_configured": true, 00:23:58.507 "data_offset": 2048, 00:23:58.507 "data_size": 63488 00:23:58.507 } 00:23:58.507 ] 00:23:58.507 }' 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.507 12:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:59.441 [2024-07-25 12:05:45.350663] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:23:59.441 [2024-07-25 12:05:45.350714] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:59.442 [2024-07-25 12:05:45.350803] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.442 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.700 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.700 "name": "raid_bdev1", 00:23:59.700 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:59.700 "strip_size_kb": 0, 00:23:59.700 "state": "online", 00:23:59.700 "raid_level": "raid1", 00:23:59.700 "superblock": true, 00:23:59.700 "num_base_bdevs": 4, 00:23:59.700 "num_base_bdevs_discovered": 3, 00:23:59.700 "num_base_bdevs_operational": 3, 00:23:59.700 "base_bdevs_list": [ 00:23:59.700 { 00:23:59.700 "name": "spare", 00:23:59.700 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:59.700 "is_configured": true, 00:23:59.700 "data_offset": 2048, 00:23:59.700 "data_size": 63488 00:23:59.700 }, 00:23:59.700 { 00:23:59.700 "name": null, 00:23:59.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.700 "is_configured": false, 00:23:59.700 "data_offset": 2048, 00:23:59.700 "data_size": 63488 00:23:59.700 }, 00:23:59.700 { 00:23:59.700 "name": "BaseBdev3", 00:23:59.700 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:59.700 "is_configured": true, 00:23:59.700 "data_offset": 2048, 00:23:59.700 "data_size": 63488 00:23:59.700 }, 00:23:59.700 { 00:23:59.700 "name": "BaseBdev4", 00:23:59.700 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:59.700 "is_configured": true, 00:23:59.700 "data_offset": 2048, 00:23:59.700 "data_size": 63488 00:23:59.700 } 00:23:59.700 ] 00:23:59.700 }' 00:23:59.700 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.700 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:59.700 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.958 12:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.958 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.958 "name": "raid_bdev1", 00:23:59.958 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:23:59.958 "strip_size_kb": 0, 00:23:59.958 "state": "online", 00:23:59.958 "raid_level": "raid1", 00:23:59.958 "superblock": true, 00:23:59.958 "num_base_bdevs": 4, 00:23:59.958 "num_base_bdevs_discovered": 3, 00:23:59.958 "num_base_bdevs_operational": 3, 00:23:59.958 "base_bdevs_list": [ 00:23:59.958 { 00:23:59.958 "name": "spare", 00:23:59.958 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:23:59.958 "is_configured": true, 00:23:59.958 "data_offset": 2048, 00:23:59.958 "data_size": 63488 00:23:59.958 }, 00:23:59.958 { 00:23:59.958 "name": null, 00:23:59.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.958 "is_configured": false, 00:23:59.958 "data_offset": 2048, 00:23:59.958 "data_size": 63488 00:23:59.958 }, 00:23:59.958 { 00:23:59.958 "name": "BaseBdev3", 00:23:59.958 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:23:59.958 "is_configured": true, 00:23:59.958 "data_offset": 2048, 00:23:59.958 "data_size": 63488 00:23:59.958 }, 00:23:59.958 { 00:23:59.958 "name": "BaseBdev4", 00:23:59.958 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:23:59.958 "is_configured": true, 00:23:59.958 "data_offset": 2048, 00:23:59.958 "data_size": 63488 00:23:59.958 } 00:23:59.958 ] 00:23:59.958 }' 00:23:59.958 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:00.215 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:00.215 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.216 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.474 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.474 "name": "raid_bdev1", 00:24:00.474 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:00.474 "strip_size_kb": 0, 00:24:00.474 "state": "online", 00:24:00.474 "raid_level": "raid1", 00:24:00.474 "superblock": true, 00:24:00.474 "num_base_bdevs": 4, 00:24:00.474 "num_base_bdevs_discovered": 3, 00:24:00.474 "num_base_bdevs_operational": 3, 00:24:00.474 "base_bdevs_list": [ 00:24:00.474 { 00:24:00.474 "name": "spare", 00:24:00.474 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:24:00.474 "is_configured": true, 00:24:00.474 "data_offset": 2048, 00:24:00.474 "data_size": 63488 00:24:00.474 }, 00:24:00.474 { 00:24:00.474 "name": null, 00:24:00.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.474 "is_configured": false, 00:24:00.474 "data_offset": 2048, 00:24:00.474 "data_size": 63488 00:24:00.474 }, 00:24:00.474 { 00:24:00.474 "name": "BaseBdev3", 00:24:00.474 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:00.474 "is_configured": true, 00:24:00.474 "data_offset": 2048, 00:24:00.474 "data_size": 63488 00:24:00.474 }, 00:24:00.474 { 00:24:00.474 "name": "BaseBdev4", 00:24:00.474 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:00.474 "is_configured": true, 00:24:00.474 "data_offset": 2048, 00:24:00.474 "data_size": 63488 00:24:00.474 } 00:24:00.474 ] 00:24:00.474 }' 00:24:00.474 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.474 12:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.039 12:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:01.297 [2024-07-25 12:05:47.159839] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.297 [2024-07-25 12:05:47.159863] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:01.297 [2024-07-25 12:05:47.159915] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.297 [2024-07-25 12:05:47.159981] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.297 [2024-07-25 12:05:47.159997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3e5b0 name raid_bdev1, state offline 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' 
'/dev/nbd0 /dev/nbd1' 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:01.297 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:01.555 /dev/nbd0 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:01.555 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.555 1+0 records in 00:24:01.555 1+0 records out 00:24:01.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223182 s, 18.4 MB/s 00:24:01.813 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.813 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:01.814 /dev/nbd1 00:24:01.814 12:05:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:01.814 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:02.072 1+0 records in 00:24:02.072 1+0 records out 00:24:02.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291513 s, 14.1 MB/s 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:02.072 12:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:02.072 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:02.330 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:02.331 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:02.331 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:02.331 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:02.589 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:02.847 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:02.847 [2024-07-25 12:05:48.948329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:02.847 [2024-07-25 12:05:48.948373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.847 [2024-07-25 12:05:48.948392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ddb020 00:24:02.847 [2024-07-25 12:05:48.948403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.847 [2024-07-25 12:05:48.949918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.847 [2024-07-25 12:05:48.949944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:02.847 [2024-07-25 12:05:48.950016] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:02.847 [2024-07-25 12:05:48.950040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:02.847 [2024-07-25 12:05:48.950131] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:02.847 [2024-07-25 12:05:48.950208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:02.847 spare 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.105 12:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.105 [2024-07-25 12:05:49.050519] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c3ee30 00:24:03.105 [2024-07-25 12:05:49.050533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:03.105 [2024-07-25 12:05:49.050713] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c40960 00:24:03.105 [2024-07-25 12:05:49.050847] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c3ee30 00:24:03.105 [2024-07-25 12:05:49.050857] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c3ee30 00:24:03.105 [2024-07-25 12:05:49.050950] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.105 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.105 "name": "raid_bdev1", 00:24:03.105 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:03.105 "strip_size_kb": 0, 00:24:03.105 "state": "online", 00:24:03.105 "raid_level": "raid1", 00:24:03.105 "superblock": true, 00:24:03.105 "num_base_bdevs": 4, 00:24:03.105 "num_base_bdevs_discovered": 3, 00:24:03.105 "num_base_bdevs_operational": 3, 00:24:03.105 "base_bdevs_list": [ 00:24:03.105 { 00:24:03.105 "name": "spare", 00:24:03.105 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:24:03.105 "is_configured": true, 00:24:03.105 "data_offset": 2048, 00:24:03.105 "data_size": 63488 00:24:03.105 }, 00:24:03.105 { 00:24:03.105 "name": null, 00:24:03.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.105 "is_configured": false, 00:24:03.105 "data_offset": 2048, 00:24:03.105 "data_size": 63488 00:24:03.105 }, 00:24:03.105 { 00:24:03.105 "name": "BaseBdev3", 00:24:03.105 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:03.105 "is_configured": true, 00:24:03.105 "data_offset": 2048, 00:24:03.105 "data_size": 63488 00:24:03.105 }, 00:24:03.105 { 00:24:03.105 "name": "BaseBdev4", 00:24:03.105 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:03.105 "is_configured": true, 00:24:03.105 "data_offset": 2048, 00:24:03.105 "data_size": 63488 00:24:03.105 } 00:24:03.105 ] 00:24:03.105 }' 00:24:03.105 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.105 12:05:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:03.671 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.671 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:03.671 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:03.671 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:03.671 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:03.928 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.928 12:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.928 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.928 "name": "raid_bdev1", 00:24:03.928 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:03.928 "strip_size_kb": 0, 00:24:03.928 "state": "online", 00:24:03.928 "raid_level": "raid1", 00:24:03.928 "superblock": true, 00:24:03.928 "num_base_bdevs": 4, 00:24:03.928 "num_base_bdevs_discovered": 3, 00:24:03.928 "num_base_bdevs_operational": 3, 00:24:03.928 "base_bdevs_list": [ 00:24:03.928 { 00:24:03.928 "name": "spare", 00:24:03.928 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:24:03.928 "is_configured": true, 00:24:03.928 "data_offset": 2048, 00:24:03.928 "data_size": 63488 00:24:03.928 }, 00:24:03.928 { 00:24:03.928 "name": null, 00:24:03.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.928 "is_configured": false, 00:24:03.928 "data_offset": 2048, 00:24:03.928 "data_size": 63488 00:24:03.928 }, 00:24:03.928 { 00:24:03.928 "name": "BaseBdev3", 00:24:03.928 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:03.928 "is_configured": true, 00:24:03.928 "data_offset": 2048, 00:24:03.928 "data_size": 63488 00:24:03.928 }, 00:24:03.928 { 00:24:03.928 "name": "BaseBdev4", 00:24:03.928 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:03.928 "is_configured": true, 00:24:03.928 "data_offset": 2048, 00:24:03.928 "data_size": 63488 00:24:03.928 } 00:24:03.928 ] 00:24:03.928 }' 00:24:03.928 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:04.186 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:04.186 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:04.186 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:04.186 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.186 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:04.445 [2024-07-25 12:05:50.524578] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.445 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.703 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.703 "name": "raid_bdev1", 00:24:04.703 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:04.703 "strip_size_kb": 0, 00:24:04.703 "state": "online", 00:24:04.703 "raid_level": "raid1", 00:24:04.703 "superblock": true, 00:24:04.703 "num_base_bdevs": 4, 00:24:04.703 "num_base_bdevs_discovered": 2, 00:24:04.703 "num_base_bdevs_operational": 2, 00:24:04.703 "base_bdevs_list": [ 00:24:04.703 { 00:24:04.703 "name": null, 00:24:04.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.703 "is_configured": false, 00:24:04.703 "data_offset": 2048, 00:24:04.703 "data_size": 63488 00:24:04.703 }, 00:24:04.703 { 00:24:04.703 "name": null, 00:24:04.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.703 "is_configured": false, 00:24:04.703 "data_offset": 2048, 00:24:04.703 "data_size": 63488 00:24:04.703 }, 00:24:04.703 { 00:24:04.703 "name": "BaseBdev3", 00:24:04.703 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:04.703 "is_configured": true, 00:24:04.703 "data_offset": 2048, 00:24:04.703 "data_size": 63488 00:24:04.703 }, 00:24:04.703 { 00:24:04.703 "name": "BaseBdev4", 00:24:04.703 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:04.703 "is_configured": true, 00:24:04.703 "data_offset": 2048, 00:24:04.703 "data_size": 63488 00:24:04.703 } 00:24:04.703 ] 00:24:04.703 }' 00:24:04.703 12:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.703 12:05:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.268 12:05:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:05.527 [2024-07-25 12:05:51.563331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.527 [2024-07-25 12:05:51.563459] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:05.527 [2024-07-25 12:05:51.563474] 
bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:05.527 [2024-07-25 12:05:51.563499] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.527 [2024-07-25 12:05:51.567241] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3c490 00:24:05.527 [2024-07-25 12:05:51.569379] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:05.527 12:05:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.902 "name": "raid_bdev1", 00:24:06.902 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:06.902 "strip_size_kb": 0, 00:24:06.902 "state": "online", 00:24:06.902 "raid_level": "raid1", 00:24:06.902 "superblock": true, 00:24:06.902 "num_base_bdevs": 4, 00:24:06.902 "num_base_bdevs_discovered": 3, 00:24:06.902 "num_base_bdevs_operational": 3, 00:24:06.902 "process": { 00:24:06.902 "type": "rebuild", 00:24:06.902 "target": "spare", 00:24:06.902 "progress": { 00:24:06.902 "blocks": 24576, 00:24:06.902 "percent": 38 00:24:06.902 } 00:24:06.902 }, 00:24:06.902 "base_bdevs_list": [ 00:24:06.902 { 00:24:06.902 "name": "spare", 00:24:06.902 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:24:06.902 "is_configured": true, 00:24:06.902 "data_offset": 2048, 00:24:06.902 "data_size": 63488 00:24:06.902 }, 00:24:06.902 { 00:24:06.902 "name": null, 00:24:06.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.902 "is_configured": false, 00:24:06.902 "data_offset": 2048, 00:24:06.902 "data_size": 63488 00:24:06.902 }, 00:24:06.902 { 00:24:06.902 "name": "BaseBdev3", 00:24:06.902 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:06.902 "is_configured": true, 00:24:06.902 "data_offset": 2048, 00:24:06.902 "data_size": 63488 00:24:06.902 }, 00:24:06.902 { 00:24:06.902 "name": "BaseBdev4", 00:24:06.902 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:06.902 "is_configured": true, 00:24:06.902 "data_offset": 2048, 00:24:06.902 "data_size": 63488 00:24:06.902 } 00:24:06.902 ] 00:24:06.902 }' 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.902 12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.902 
12:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:07.161 [2024-07-25 12:05:53.127273] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.161 [2024-07-25 12:05:53.181050] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:07.161 [2024-07-25 12:05:53.181091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.161 [2024-07-25 12:05:53.181106] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.161 [2024-07-25 12:05:53.181114] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.161 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.419 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.419 "name": "raid_bdev1", 00:24:07.419 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:07.419 "strip_size_kb": 0, 00:24:07.419 "state": "online", 00:24:07.419 "raid_level": "raid1", 00:24:07.419 "superblock": true, 00:24:07.419 "num_base_bdevs": 4, 00:24:07.419 "num_base_bdevs_discovered": 2, 00:24:07.419 "num_base_bdevs_operational": 2, 00:24:07.419 "base_bdevs_list": [ 00:24:07.419 { 00:24:07.419 "name": null, 00:24:07.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.419 "is_configured": false, 00:24:07.419 "data_offset": 2048, 00:24:07.419 "data_size": 63488 00:24:07.419 }, 00:24:07.419 { 00:24:07.419 "name": null, 00:24:07.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.419 "is_configured": false, 00:24:07.419 "data_offset": 2048, 00:24:07.419 "data_size": 63488 00:24:07.419 }, 00:24:07.419 { 00:24:07.419 "name": "BaseBdev3", 00:24:07.419 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:07.419 "is_configured": true, 00:24:07.419 "data_offset": 2048, 00:24:07.419 "data_size": 63488 00:24:07.419 }, 00:24:07.419 { 00:24:07.419 "name": "BaseBdev4", 00:24:07.419 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:07.419 "is_configured": true, 
00:24:07.419 "data_offset": 2048, 00:24:07.419 "data_size": 63488 00:24:07.419 } 00:24:07.419 ] 00:24:07.419 }' 00:24:07.419 12:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.419 12:05:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.986 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:08.244 [2024-07-25 12:05:54.219524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:08.244 [2024-07-25 12:05:54.219568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.244 [2024-07-25 12:05:54.219588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c3a2e0 00:24:08.244 [2024-07-25 12:05:54.219600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.244 [2024-07-25 12:05:54.219942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.244 [2024-07-25 12:05:54.219958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:08.244 [2024-07-25 12:05:54.220028] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:08.244 [2024-07-25 12:05:54.220038] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:08.244 [2024-07-25 12:05:54.220049] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:08.244 [2024-07-25 12:05:54.220066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.244 [2024-07-25 12:05:54.223835] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3f4e0 00:24:08.244 spare 00:24:08.244 [2024-07-25 12:05:54.225207] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.244 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.179 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.438 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.438 "name": "raid_bdev1", 00:24:09.438 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:09.438 "strip_size_kb": 0, 00:24:09.438 "state": "online", 00:24:09.438 "raid_level": "raid1", 00:24:09.438 "superblock": true, 00:24:09.438 "num_base_bdevs": 4, 00:24:09.438 "num_base_bdevs_discovered": 3, 00:24:09.438 "num_base_bdevs_operational": 3, 00:24:09.438 
"process": { 00:24:09.438 "type": "rebuild", 00:24:09.438 "target": "spare", 00:24:09.438 "progress": { 00:24:09.438 "blocks": 24576, 00:24:09.438 "percent": 38 00:24:09.438 } 00:24:09.438 }, 00:24:09.438 "base_bdevs_list": [ 00:24:09.438 { 00:24:09.438 "name": "spare", 00:24:09.438 "uuid": "bcc928a8-f5d1-5a52-a65a-ddb0c47d9370", 00:24:09.438 "is_configured": true, 00:24:09.438 "data_offset": 2048, 00:24:09.438 "data_size": 63488 00:24:09.438 }, 00:24:09.438 { 00:24:09.438 "name": null, 00:24:09.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.438 "is_configured": false, 00:24:09.438 "data_offset": 2048, 00:24:09.438 "data_size": 63488 00:24:09.438 }, 00:24:09.438 { 00:24:09.438 "name": "BaseBdev3", 00:24:09.438 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:09.438 "is_configured": true, 00:24:09.438 "data_offset": 2048, 00:24:09.438 "data_size": 63488 00:24:09.438 }, 00:24:09.438 { 00:24:09.438 "name": "BaseBdev4", 00:24:09.438 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:09.438 "is_configured": true, 00:24:09.438 "data_offset": 2048, 00:24:09.438 "data_size": 63488 00:24:09.438 } 00:24:09.438 ] 00:24:09.438 }' 00:24:09.438 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.438 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.438 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.697 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.697 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:09.697 [2024-07-25 12:05:55.764679] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.000 [2024-07-25 12:05:55.836836] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:10.000 [2024-07-25 12:05:55.836878] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.000 [2024-07-25 12:05:55.836893] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.000 [2024-07-25 12:05:55.836900] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.001 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.001 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.001 "name": "raid_bdev1", 00:24:10.001 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:10.001 "strip_size_kb": 0, 00:24:10.001 "state": "online", 00:24:10.001 "raid_level": "raid1", 00:24:10.001 "superblock": true, 00:24:10.001 "num_base_bdevs": 4, 00:24:10.001 "num_base_bdevs_discovered": 2, 00:24:10.001 "num_base_bdevs_operational": 2, 00:24:10.001 "base_bdevs_list": [ 00:24:10.001 { 00:24:10.001 "name": null, 00:24:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.001 "is_configured": false, 00:24:10.001 "data_offset": 2048, 00:24:10.001 "data_size": 63488 00:24:10.001 }, 00:24:10.001 { 00:24:10.001 "name": null, 00:24:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.001 "is_configured": false, 00:24:10.001 "data_offset": 2048, 00:24:10.001 "data_size": 63488 00:24:10.001 }, 00:24:10.001 { 00:24:10.001 "name": "BaseBdev3", 00:24:10.001 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:10.001 "is_configured": true, 00:24:10.001 "data_offset": 2048, 00:24:10.001 "data_size": 63488 00:24:10.001 }, 00:24:10.001 { 00:24:10.001 "name": "BaseBdev4", 00:24:10.001 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:10.001 "is_configured": true, 00:24:10.001 "data_offset": 2048, 00:24:10.001 "data_size": 63488 00:24:10.001 } 00:24:10.001 ] 00:24:10.001 }' 00:24:10.001 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.001 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.567 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.825 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:10.825 "name": "raid_bdev1", 00:24:10.826 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:10.826 "strip_size_kb": 0, 00:24:10.826 "state": "online", 00:24:10.826 "raid_level": "raid1", 00:24:10.826 "superblock": true, 00:24:10.826 "num_base_bdevs": 4, 00:24:10.826 "num_base_bdevs_discovered": 2, 00:24:10.826 "num_base_bdevs_operational": 2, 00:24:10.826 "base_bdevs_list": [ 00:24:10.826 { 00:24:10.826 "name": null, 00:24:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.826 "is_configured": false, 00:24:10.826 "data_offset": 2048, 00:24:10.826 "data_size": 63488 00:24:10.826 }, 00:24:10.826 { 
00:24:10.826 "name": null, 00:24:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.826 "is_configured": false, 00:24:10.826 "data_offset": 2048, 00:24:10.826 "data_size": 63488 00:24:10.826 }, 00:24:10.826 { 00:24:10.826 "name": "BaseBdev3", 00:24:10.826 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:10.826 "is_configured": true, 00:24:10.826 "data_offset": 2048, 00:24:10.826 "data_size": 63488 00:24:10.826 }, 00:24:10.826 { 00:24:10.826 "name": "BaseBdev4", 00:24:10.826 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:10.826 "is_configured": true, 00:24:10.826 "data_offset": 2048, 00:24:10.826 "data_size": 63488 00:24:10.826 } 00:24:10.826 ] 00:24:10.826 }' 00:24:10.826 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:10.826 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:10.826 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:11.084 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.084 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:11.084 12:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:11.342 [2024-07-25 12:05:57.393071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:11.342 [2024-07-25 12:05:57.393112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.342 [2024-07-25 12:05:57.393131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c3f2a0 00:24:11.342 [2024-07-25 12:05:57.393151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.342 [2024-07-25 12:05:57.393463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.342 [2024-07-25 12:05:57.393479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.343 [2024-07-25 12:05:57.393534] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:11.343 [2024-07-25 12:05:57.393545] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:11.343 [2024-07-25 12:05:57.393554] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:11.343 BaseBdev1 00:24:11.343 12:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.720 "name": "raid_bdev1", 00:24:12.720 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:12.720 "strip_size_kb": 0, 00:24:12.720 "state": "online", 00:24:12.720 "raid_level": "raid1", 00:24:12.720 "superblock": true, 00:24:12.720 "num_base_bdevs": 4, 00:24:12.720 "num_base_bdevs_discovered": 2, 00:24:12.720 "num_base_bdevs_operational": 2, 00:24:12.720 "base_bdevs_list": [ 00:24:12.720 { 00:24:12.720 "name": null, 00:24:12.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.720 "is_configured": false, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 }, 00:24:12.720 { 00:24:12.720 "name": null, 00:24:12.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.720 "is_configured": false, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 }, 00:24:12.720 { 00:24:12.720 "name": "BaseBdev3", 00:24:12.720 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:12.720 "is_configured": true, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 }, 00:24:12.720 { 00:24:12.720 "name": "BaseBdev4", 00:24:12.720 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:12.720 "is_configured": true, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 } 00:24:12.720 ] 00:24:12.720 }' 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.720 12:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.287 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:13.545 "name": "raid_bdev1", 00:24:13.545 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:13.545 "strip_size_kb": 0, 00:24:13.545 "state": "online", 00:24:13.545 "raid_level": 
"raid1", 00:24:13.545 "superblock": true, 00:24:13.545 "num_base_bdevs": 4, 00:24:13.545 "num_base_bdevs_discovered": 2, 00:24:13.545 "num_base_bdevs_operational": 2, 00:24:13.545 "base_bdevs_list": [ 00:24:13.545 { 00:24:13.545 "name": null, 00:24:13.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.545 "is_configured": false, 00:24:13.545 "data_offset": 2048, 00:24:13.545 "data_size": 63488 00:24:13.545 }, 00:24:13.545 { 00:24:13.545 "name": null, 00:24:13.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.545 "is_configured": false, 00:24:13.545 "data_offset": 2048, 00:24:13.545 "data_size": 63488 00:24:13.545 }, 00:24:13.545 { 00:24:13.545 "name": "BaseBdev3", 00:24:13.545 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:13.545 "is_configured": true, 00:24:13.545 "data_offset": 2048, 00:24:13.545 "data_size": 63488 00:24:13.545 }, 00:24:13.545 { 00:24:13.545 "name": "BaseBdev4", 00:24:13.545 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:13.545 "is_configured": true, 00:24:13.545 "data_offset": 2048, 00:24:13.545 "data_size": 63488 00:24:13.545 } 00:24:13.545 ] 00:24:13.545 }' 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:13.545 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.804 
[2024-07-25 12:05:59.771362] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.804 [2024-07-25 12:05:59.771466] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:13.804 [2024-07-25 12:05:59.771481] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:13.804 request: 00:24:13.804 { 00:24:13.804 "base_bdev": "BaseBdev1", 00:24:13.804 "raid_bdev": "raid_bdev1", 00:24:13.804 "method": "bdev_raid_add_base_bdev", 00:24:13.804 "req_id": 1 00:24:13.804 } 00:24:13.804 Got JSON-RPC error response 00:24:13.804 response: 00:24:13.804 { 00:24:13.804 "code": -22, 00:24:13.804 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:13.804 } 00:24:13.804 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:13.804 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:13.804 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:13.804 12:05:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:13.804 12:05:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.740 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.998 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.998 "name": "raid_bdev1", 00:24:14.998 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:14.998 "strip_size_kb": 0, 00:24:14.998 "state": "online", 00:24:14.998 "raid_level": "raid1", 00:24:14.998 "superblock": true, 00:24:14.998 "num_base_bdevs": 4, 00:24:14.998 "num_base_bdevs_discovered": 2, 00:24:14.998 "num_base_bdevs_operational": 2, 00:24:14.998 "base_bdevs_list": [ 00:24:14.998 { 00:24:14.998 "name": null, 00:24:14.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.998 "is_configured": false, 00:24:14.998 "data_offset": 2048, 00:24:14.998 "data_size": 63488 00:24:14.998 }, 00:24:14.998 { 00:24:14.998 "name": null, 00:24:14.998 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:14.998 "is_configured": false, 00:24:14.998 "data_offset": 2048, 00:24:14.998 "data_size": 63488 00:24:14.998 }, 00:24:14.998 { 00:24:14.998 "name": "BaseBdev3", 00:24:14.998 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:14.998 "is_configured": true, 00:24:14.998 "data_offset": 2048, 00:24:14.998 "data_size": 63488 00:24:14.998 }, 00:24:14.998 { 00:24:14.998 "name": "BaseBdev4", 00:24:14.998 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:14.998 "is_configured": true, 00:24:14.998 "data_offset": 2048, 00:24:14.998 "data_size": 63488 00:24:14.998 } 00:24:14.999 ] 00:24:14.999 }' 00:24:14.999 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.999 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.566 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.825 "name": "raid_bdev1", 00:24:15.825 "uuid": "ade5a14e-36cc-4c8a-9bbc-a8e824198d1d", 00:24:15.825 "strip_size_kb": 0, 00:24:15.825 "state": "online", 00:24:15.825 "raid_level": "raid1", 00:24:15.825 "superblock": true, 00:24:15.825 "num_base_bdevs": 4, 00:24:15.825 "num_base_bdevs_discovered": 2, 00:24:15.825 "num_base_bdevs_operational": 2, 00:24:15.825 "base_bdevs_list": [ 00:24:15.825 { 00:24:15.825 "name": null, 00:24:15.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.825 "is_configured": false, 00:24:15.825 "data_offset": 2048, 00:24:15.825 "data_size": 63488 00:24:15.825 }, 00:24:15.825 { 00:24:15.825 "name": null, 00:24:15.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.825 "is_configured": false, 00:24:15.825 "data_offset": 2048, 00:24:15.825 "data_size": 63488 00:24:15.825 }, 00:24:15.825 { 00:24:15.825 "name": "BaseBdev3", 00:24:15.825 "uuid": "cf79d029-0b31-5eff-a8f8-be08cebfcf50", 00:24:15.825 "is_configured": true, 00:24:15.825 "data_offset": 2048, 00:24:15.825 "data_size": 63488 00:24:15.825 }, 00:24:15.825 { 00:24:15.825 "name": "BaseBdev4", 00:24:15.825 "uuid": "5fa135bc-8faf-5f02-8fb6-0d2b0c8fb481", 00:24:15.825 "is_configured": true, 00:24:15.825 "data_offset": 2048, 00:24:15.825 "data_size": 63488 00:24:15.825 } 00:24:15.825 ] 00:24:15.825 }' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 48335 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 48335 ']' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 48335 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.825 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48335 00:24:16.084 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.084 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.084 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48335' 00:24:16.084 killing process with pid 48335 00:24:16.084 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 48335 00:24:16.084 Received shutdown signal, test time was about 60.000000 seconds 00:24:16.084 00:24:16.084 Latency(us) 00:24:16.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.084 =================================================================================================================== 00:24:16.084 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.084 [2024-07-25 12:06:01.986199] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.084 [2024-07-25 12:06:01.986282] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.084 [2024-07-25 12:06:01.986332] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.084 [2024-07-25 12:06:01.986343] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3ee30 name raid_bdev1, state offline 00:24:16.084 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 48335 00:24:16.084 [2024-07-25 12:06:02.025809] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:24:16.343 00:24:16.343 real 0m35.280s 00:24:16.343 user 0m51.886s 00:24:16.343 sys 0m6.045s 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.343 ************************************ 00:24:16.343 END TEST raid_rebuild_test_sb 00:24:16.343 ************************************ 00:24:16.343 12:06:02 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:16.343 12:06:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:16.343 12:06:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:16.343 12:06:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.343 ************************************ 00:24:16.343 START TEST raid_rebuild_test_io 00:24:16.343 ************************************ 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:24:16.343 12:06:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:16.343 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=54735 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 54735 /var/tmp/spdk-raid.sock 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 
54735 ']' 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.344 12:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:16.344 [2024-07-25 12:06:02.372070] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:24:16.344 [2024-07-25 12:06:02.372125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54735 ] 00:24:16.344 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:16.344 Zero copy mechanism will not be used. 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3d:02.7 cannot be used 
00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:16.344 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.344 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:16.603 [2024-07-25 12:06:02.503904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.603 [2024-07-25 12:06:02.591586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.603 [2024-07-25 12:06:02.658589] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:16.603 [2024-07-25 12:06:02.658617] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:17.171 12:06:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.171 12:06:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:24:17.171 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:17.171 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:17.435 BaseBdev1_malloc 00:24:17.435 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:17.694 [2024-07-25 12:06:03.711088] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:17.694 [2024-07-25 12:06:03.711131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.694 [2024-07-25 12:06:03.711157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10bf5f0 00:24:17.694 [2024-07-25 12:06:03.711169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.694 [2024-07-25 12:06:03.712724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.694 [2024-07-25 12:06:03.712750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:17.694 BaseBdev1 00:24:17.694 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:17.694 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:17.953 BaseBdev2_malloc 00:24:17.953 12:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:18.211 [2024-07-25 12:06:04.160804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:18.211 [2024-07-25 12:06:04.160843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.211 [2024-07-25 12:06:04.160860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1263130 00:24:18.211 [2024-07-25 12:06:04.160871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.211 [2024-07-25 12:06:04.162242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.211 [2024-07-25 12:06:04.162268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:18.212 BaseBdev2 00:24:18.212 12:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:18.212 12:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:18.470 BaseBdev3_malloc 00:24:18.470 12:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:18.729 [2024-07-25 12:06:04.618379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:18.729 [2024-07-25 12:06:04.618418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.729 [2024-07-25 12:06:04.618435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1259420 00:24:18.729 [2024-07-25 12:06:04.618446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.729 [2024-07-25 12:06:04.619797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.729 [2024-07-25 12:06:04.619823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:18.729 BaseBdev3 00:24:18.729 12:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:18.729 12:06:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:18.988 BaseBdev4_malloc 00:24:18.988 12:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:18.988 [2024-07-25 12:06:05.079822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:18.988 [2024-07-25 12:06:05.079863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.988 [2024-07-25 12:06:05.079880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1259d40 00:24:18.988 [2024-07-25 12:06:05.079891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.988 [2024-07-25 12:06:05.081252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.988 [2024-07-25 12:06:05.081277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:18.988 BaseBdev4 00:24:18.988 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:19.247 spare_malloc 00:24:19.247 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:19.506 spare_delay 00:24:19.506 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:19.764 [2024-07-25 12:06:05.749735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:19.764 [2024-07-25 12:06:05.749774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.764 [2024-07-25 12:06:05.749793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b8db0 00:24:19.764 [2024-07-25 12:06:05.749805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.764 [2024-07-25 12:06:05.751215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.764 [2024-07-25 12:06:05.751241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:19.764 spare 00:24:19.764 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:20.023 [2024-07-25 12:06:05.974357] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:20.023 [2024-07-25 12:06:05.975509] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.023 [2024-07-25 12:06:05.975558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:20.023 [2024-07-25 12:06:05.975598] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:20.023 [2024-07-25 12:06:05.975673] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x10bb5b0 00:24:20.023 [2024-07-25 12:06:05.975682] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:20.023 [2024-07-25 12:06:05.975876] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10be380 00:24:20.023 [2024-07-25 12:06:05.976018] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10bb5b0 00:24:20.023 [2024-07-25 12:06:05.976027] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10bb5b0 00:24:20.023 [2024-07-25 12:06:05.976131] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.023 12:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.282 12:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.282 "name": "raid_bdev1", 00:24:20.282 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:20.282 "strip_size_kb": 0, 00:24:20.282 "state": "online", 00:24:20.282 "raid_level": "raid1", 00:24:20.282 "superblock": false, 00:24:20.282 "num_base_bdevs": 4, 00:24:20.282 "num_base_bdevs_discovered": 4, 00:24:20.282 "num_base_bdevs_operational": 4, 00:24:20.282 "base_bdevs_list": [ 00:24:20.282 { 00:24:20.282 "name": "BaseBdev1", 00:24:20.282 "uuid": "387fea00-44dd-5f28-88c0-cc50bc45388e", 00:24:20.282 "is_configured": true, 00:24:20.282 "data_offset": 0, 00:24:20.282 "data_size": 65536 00:24:20.282 }, 00:24:20.282 { 00:24:20.282 "name": "BaseBdev2", 00:24:20.282 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:20.282 "is_configured": true, 00:24:20.282 "data_offset": 0, 00:24:20.282 "data_size": 65536 00:24:20.282 }, 00:24:20.282 { 00:24:20.282 "name": "BaseBdev3", 00:24:20.282 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:20.282 "is_configured": true, 00:24:20.282 "data_offset": 0, 00:24:20.282 "data_size": 65536 00:24:20.282 }, 00:24:20.282 { 00:24:20.282 "name": "BaseBdev4", 00:24:20.282 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:20.282 "is_configured": true, 00:24:20.282 "data_offset": 0, 00:24:20.282 "data_size": 65536 00:24:20.282 } 00:24:20.282 ] 00:24:20.282 }' 00:24:20.282 12:06:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.282 12:06:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:20.849 12:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:20.849 12:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:21.107 [2024-07-25 12:06:07.005333] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:21.107 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:21.107 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.107 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:21.366 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:21.366 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:21.366 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:21.366 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:21.366 [2024-07-25 12:06:07.355965] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10be8e0 00:24:21.366 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:21.366 Zero copy mechanism will not be used. 00:24:21.366 Running I/O for 60 seconds... 
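A rough sketch of the background-I/O setup logged above (the two command lines are copied from this run; the backgrounding and ordering are simplified assumptions, and the flag meanings are as understood from this invocation rather than taken from bdevperf documentation): bdevperf is started against the raid socket in wait mode with the 60-second randrw workload, the base bdevs and raid_bdev1 are created over RPC, and the actual I/O run is then triggered through bdevperf.py perform_tests.

bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
# start bdevperf paused (-z) on the raid socket: 60 s randrw, 50% reads, 3 MiB I/O, queue depth 2
$bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
# ... create the base bdevs and raid_bdev1 via rpc.py -s /var/tmp/spdk-raid.sock ...
# then kick off the workload over RPC
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &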
00:24:21.366 [2024-07-25 12:06:07.466198] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:21.366 [2024-07-25 12:06:07.466370] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x10be8e0 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.626 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.885 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.885 "name": "raid_bdev1", 00:24:21.885 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:21.885 "strip_size_kb": 0, 00:24:21.885 "state": "online", 00:24:21.885 "raid_level": "raid1", 00:24:21.885 "superblock": false, 00:24:21.885 "num_base_bdevs": 4, 00:24:21.885 "num_base_bdevs_discovered": 3, 00:24:21.885 "num_base_bdevs_operational": 3, 00:24:21.885 "base_bdevs_list": [ 00:24:21.885 { 00:24:21.885 "name": null, 00:24:21.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.885 "is_configured": false, 00:24:21.885 "data_offset": 0, 00:24:21.885 "data_size": 65536 00:24:21.885 }, 00:24:21.885 { 00:24:21.885 "name": "BaseBdev2", 00:24:21.886 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:21.886 "is_configured": true, 00:24:21.886 "data_offset": 0, 00:24:21.886 "data_size": 65536 00:24:21.886 }, 00:24:21.886 { 00:24:21.886 "name": "BaseBdev3", 00:24:21.886 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:21.886 "is_configured": true, 00:24:21.886 "data_offset": 0, 00:24:21.886 "data_size": 65536 00:24:21.886 }, 00:24:21.886 { 00:24:21.886 "name": "BaseBdev4", 00:24:21.886 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:21.886 "is_configured": true, 00:24:21.886 "data_offset": 0, 00:24:21.886 "data_size": 65536 00:24:21.886 } 00:24:21.886 ] 00:24:21.886 }' 00:24:21.886 12:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.886 12:06:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:22.454 12:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:22.454 [2024-07-25 12:06:08.566112] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.713 12:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:22.713 [2024-07-25 12:06:08.633663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdbdb40 00:24:22.713 [2024-07-25 12:06:08.635863] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:22.713 [2024-07-25 12:06:08.756030] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:22.713 [2024-07-25 12:06:08.756319] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:22.972 [2024-07-25 12:06:08.959304] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:22.972 [2024-07-25 12:06:08.959433] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:23.232 [2024-07-25 12:06:09.236166] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:23.492 [2024-07-25 12:06:09.448096] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:23.492 [2024-07-25 12:06:09.448611] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.751 [2024-07-25 12:06:09.781347] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:23.751 [2024-07-25 12:06:09.781567] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:23.751 "name": "raid_bdev1", 00:24:23.751 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:23.751 "strip_size_kb": 0, 00:24:23.751 "state": "online", 00:24:23.751 "raid_level": "raid1", 00:24:23.751 "superblock": false, 00:24:23.751 "num_base_bdevs": 4, 00:24:23.751 "num_base_bdevs_discovered": 4, 00:24:23.751 "num_base_bdevs_operational": 4, 00:24:23.751 "process": { 00:24:23.751 "type": "rebuild", 00:24:23.751 "target": "spare", 00:24:23.751 "progress": { 00:24:23.751 "blocks": 14336, 00:24:23.751 "percent": 21 00:24:23.751 } 00:24:23.751 }, 00:24:23.751 "base_bdevs_list": [ 00:24:23.751 { 00:24:23.751 "name": "spare", 00:24:23.751 "uuid": 
"7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:23.751 "is_configured": true, 00:24:23.751 "data_offset": 0, 00:24:23.751 "data_size": 65536 00:24:23.751 }, 00:24:23.751 { 00:24:23.751 "name": "BaseBdev2", 00:24:23.751 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:23.751 "is_configured": true, 00:24:23.751 "data_offset": 0, 00:24:23.751 "data_size": 65536 00:24:23.751 }, 00:24:23.751 { 00:24:23.751 "name": "BaseBdev3", 00:24:23.751 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:23.751 "is_configured": true, 00:24:23.751 "data_offset": 0, 00:24:23.751 "data_size": 65536 00:24:23.751 }, 00:24:23.751 { 00:24:23.751 "name": "BaseBdev4", 00:24:23.751 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:23.751 "is_configured": true, 00:24:23.751 "data_offset": 0, 00:24:23.751 "data_size": 65536 00:24:23.751 } 00:24:23.751 ] 00:24:23.751 }' 00:24:23.751 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:24.011 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.011 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:24.011 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.011 12:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:24.011 [2024-07-25 12:06:10.003692] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:24.270 [2024-07-25 12:06:10.159525] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.270 [2024-07-25 12:06:10.259779] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:24.270 [2024-07-25 12:06:10.271114] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.270 [2024-07-25 12:06:10.271150] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.270 [2024-07-25 12:06:10.271160] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:24.270 [2024-07-25 12:06:10.292211] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x10be8e0 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.270 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.545 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.545 "name": "raid_bdev1", 00:24:24.545 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:24.545 "strip_size_kb": 0, 00:24:24.545 "state": "online", 00:24:24.545 "raid_level": "raid1", 00:24:24.545 "superblock": false, 00:24:24.545 "num_base_bdevs": 4, 00:24:24.545 "num_base_bdevs_discovered": 3, 00:24:24.545 "num_base_bdevs_operational": 3, 00:24:24.545 "base_bdevs_list": [ 00:24:24.545 { 00:24:24.545 "name": null, 00:24:24.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.545 "is_configured": false, 00:24:24.545 "data_offset": 0, 00:24:24.545 "data_size": 65536 00:24:24.545 }, 00:24:24.545 { 00:24:24.545 "name": "BaseBdev2", 00:24:24.545 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:24.545 "is_configured": true, 00:24:24.545 "data_offset": 0, 00:24:24.545 "data_size": 65536 00:24:24.545 }, 00:24:24.545 { 00:24:24.545 "name": "BaseBdev3", 00:24:24.545 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:24.545 "is_configured": true, 00:24:24.545 "data_offset": 0, 00:24:24.545 "data_size": 65536 00:24:24.545 }, 00:24:24.545 { 00:24:24.545 "name": "BaseBdev4", 00:24:24.545 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:24.545 "is_configured": true, 00:24:24.545 "data_offset": 0, 00:24:24.545 "data_size": 65536 00:24:24.545 } 00:24:24.545 ] 00:24:24.545 }' 00:24:24.545 12:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.545 12:06:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.124 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.383 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.383 "name": "raid_bdev1", 00:24:25.383 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:25.383 "strip_size_kb": 0, 00:24:25.383 "state": "online", 00:24:25.383 "raid_level": "raid1", 00:24:25.383 "superblock": false, 00:24:25.383 "num_base_bdevs": 4, 00:24:25.383 "num_base_bdevs_discovered": 3, 00:24:25.383 "num_base_bdevs_operational": 3, 00:24:25.383 "base_bdevs_list": [ 00:24:25.383 { 00:24:25.383 "name": null, 00:24:25.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.383 "is_configured": false, 00:24:25.383 "data_offset": 0, 00:24:25.383 "data_size": 65536 00:24:25.383 }, 00:24:25.383 { 00:24:25.383 
"name": "BaseBdev2", 00:24:25.383 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:25.383 "is_configured": true, 00:24:25.383 "data_offset": 0, 00:24:25.383 "data_size": 65536 00:24:25.383 }, 00:24:25.383 { 00:24:25.383 "name": "BaseBdev3", 00:24:25.383 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:25.383 "is_configured": true, 00:24:25.383 "data_offset": 0, 00:24:25.383 "data_size": 65536 00:24:25.383 }, 00:24:25.383 { 00:24:25.383 "name": "BaseBdev4", 00:24:25.383 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:25.383 "is_configured": true, 00:24:25.383 "data_offset": 0, 00:24:25.383 "data_size": 65536 00:24:25.383 } 00:24:25.383 ] 00:24:25.383 }' 00:24:25.383 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:25.383 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:25.383 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:25.642 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:25.642 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:25.642 [2024-07-25 12:06:11.726219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:25.901 12:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:25.901 [2024-07-25 12:06:11.812045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11566b0 00:24:25.901 [2024-07-25 12:06:11.813457] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:25.901 [2024-07-25 12:06:11.921692] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:25.901 [2024-07-25 12:06:11.922795] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:26.159 [2024-07-25 12:06:12.125772] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:26.159 [2024-07-25 12:06:12.126192] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:26.726 [2024-07-25 12:06:12.631159] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.726 12:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.985 [2024-07-25 12:06:12.875124] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:26.985 [2024-07-25 12:06:12.875599] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:26.985 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:26.985 "name": "raid_bdev1", 00:24:26.985 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:26.985 "strip_size_kb": 0, 00:24:26.985 "state": "online", 00:24:26.985 "raid_level": "raid1", 00:24:26.985 "superblock": false, 00:24:26.985 "num_base_bdevs": 4, 00:24:26.985 "num_base_bdevs_discovered": 4, 00:24:26.985 "num_base_bdevs_operational": 4, 00:24:26.985 "process": { 00:24:26.985 "type": "rebuild", 00:24:26.985 "target": "spare", 00:24:26.985 "progress": { 00:24:26.985 "blocks": 14336, 00:24:26.985 "percent": 21 00:24:26.985 } 00:24:26.985 }, 00:24:26.985 "base_bdevs_list": [ 00:24:26.985 { 00:24:26.985 "name": "spare", 00:24:26.985 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:26.985 "is_configured": true, 00:24:26.985 "data_offset": 0, 00:24:26.985 "data_size": 65536 00:24:26.985 }, 00:24:26.985 { 00:24:26.985 "name": "BaseBdev2", 00:24:26.985 "uuid": "6e094e08-cd27-5ca2-b9d1-3fa25e62beae", 00:24:26.985 "is_configured": true, 00:24:26.985 "data_offset": 0, 00:24:26.985 "data_size": 65536 00:24:26.985 }, 00:24:26.985 { 00:24:26.985 "name": "BaseBdev3", 00:24:26.985 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:26.985 "is_configured": true, 00:24:26.985 "data_offset": 0, 00:24:26.985 "data_size": 65536 00:24:26.985 }, 00:24:26.985 { 00:24:26.985 "name": "BaseBdev4", 00:24:26.985 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:26.985 "is_configured": true, 00:24:26.985 "data_offset": 0, 00:24:26.985 "data_size": 65536 00:24:26.985 } 00:24:26.985 ] 00:24:26.985 }' 00:24:26.985 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:26.985 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:26.985 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:26.985 [2024-07-25 12:06:13.096511] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:27.244 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:27.244 [2024-07-25 12:06:13.339584] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:27.244 [2024-07-25 12:06:13.347871] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:27.244 [2024-07-25 12:06:13.355749] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x10be8e0 
00:24:27.244 [2024-07-25 12:06:13.355772] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x11566b0 00:24:27.502 [2024-07-25 12:06:13.365109] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.502 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.502 [2024-07-25 12:06:13.476098] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.761 "name": "raid_bdev1", 00:24:27.761 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:27.761 "strip_size_kb": 0, 00:24:27.761 "state": "online", 00:24:27.761 "raid_level": "raid1", 00:24:27.761 "superblock": false, 00:24:27.761 "num_base_bdevs": 4, 00:24:27.761 "num_base_bdevs_discovered": 3, 00:24:27.761 "num_base_bdevs_operational": 3, 00:24:27.761 "process": { 00:24:27.761 "type": "rebuild", 00:24:27.761 "target": "spare", 00:24:27.761 "progress": { 00:24:27.761 "blocks": 24576, 00:24:27.761 "percent": 37 00:24:27.761 } 00:24:27.761 }, 00:24:27.761 "base_bdevs_list": [ 00:24:27.761 { 00:24:27.761 "name": "spare", 00:24:27.761 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:27.761 "is_configured": true, 00:24:27.761 "data_offset": 0, 00:24:27.761 "data_size": 65536 00:24:27.761 }, 00:24:27.761 { 00:24:27.761 "name": null, 00:24:27.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.761 "is_configured": false, 00:24:27.761 "data_offset": 0, 00:24:27.761 "data_size": 65536 00:24:27.761 }, 00:24:27.761 { 00:24:27.761 "name": "BaseBdev3", 00:24:27.761 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:27.761 "is_configured": true, 00:24:27.761 "data_offset": 0, 00:24:27.761 "data_size": 65536 00:24:27.761 }, 00:24:27.761 { 00:24:27.761 "name": "BaseBdev4", 00:24:27.761 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:27.761 "is_configured": true, 00:24:27.761 "data_offset": 0, 00:24:27.761 "data_size": 65536 00:24:27.761 } 00:24:27.761 ] 00:24:27.761 }' 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:27.761 [2024-07-25 
12:06:13.695451] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=878 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.761 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.020 [2024-07-25 12:06:13.907058] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:28.020 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.020 "name": "raid_bdev1", 00:24:28.020 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:28.020 "strip_size_kb": 0, 00:24:28.020 "state": "online", 00:24:28.020 "raid_level": "raid1", 00:24:28.020 "superblock": false, 00:24:28.020 "num_base_bdevs": 4, 00:24:28.020 "num_base_bdevs_discovered": 3, 00:24:28.020 "num_base_bdevs_operational": 3, 00:24:28.020 "process": { 00:24:28.020 "type": "rebuild", 00:24:28.020 "target": "spare", 00:24:28.020 "progress": { 00:24:28.020 "blocks": 28672, 00:24:28.020 "percent": 43 00:24:28.020 } 00:24:28.020 }, 00:24:28.020 "base_bdevs_list": [ 00:24:28.020 { 00:24:28.020 "name": "spare", 00:24:28.020 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:28.020 "is_configured": true, 00:24:28.020 "data_offset": 0, 00:24:28.020 "data_size": 65536 00:24:28.020 }, 00:24:28.020 { 00:24:28.020 "name": null, 00:24:28.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.020 "is_configured": false, 00:24:28.020 "data_offset": 0, 00:24:28.020 "data_size": 65536 00:24:28.020 }, 00:24:28.020 { 00:24:28.020 "name": "BaseBdev3", 00:24:28.020 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:28.020 "is_configured": true, 00:24:28.020 "data_offset": 0, 00:24:28.020 "data_size": 65536 00:24:28.020 }, 00:24:28.020 { 00:24:28.020 "name": "BaseBdev4", 00:24:28.020 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:28.020 "is_configured": true, 00:24:28.020 "data_offset": 0, 00:24:28.020 "data_size": 65536 00:24:28.020 } 00:24:28.020 ] 00:24:28.020 }' 00:24:28.020 12:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.020 12:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.020 12:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.020 12:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # 
[[ spare == \s\p\a\r\e ]] 00:24:28.020 12:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:28.279 [2024-07-25 12:06:14.271039] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:28.846 [2024-07-25 12:06:14.856591] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:29.105 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:29.105 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.106 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.106 [2024-07-25 12:06:15.067240] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:29.106 [2024-07-25 12:06:15.067409] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:29.364 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.364 "name": "raid_bdev1", 00:24:29.364 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:29.364 "strip_size_kb": 0, 00:24:29.364 "state": "online", 00:24:29.364 "raid_level": "raid1", 00:24:29.364 "superblock": false, 00:24:29.364 "num_base_bdevs": 4, 00:24:29.364 "num_base_bdevs_discovered": 3, 00:24:29.364 "num_base_bdevs_operational": 3, 00:24:29.364 "process": { 00:24:29.364 "type": "rebuild", 00:24:29.364 "target": "spare", 00:24:29.364 "progress": { 00:24:29.364 "blocks": 49152, 00:24:29.364 "percent": 75 00:24:29.364 } 00:24:29.364 }, 00:24:29.364 "base_bdevs_list": [ 00:24:29.364 { 00:24:29.364 "name": "spare", 00:24:29.364 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:29.364 "is_configured": true, 00:24:29.364 "data_offset": 0, 00:24:29.364 "data_size": 65536 00:24:29.364 }, 00:24:29.364 { 00:24:29.364 "name": null, 00:24:29.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.364 "is_configured": false, 00:24:29.364 "data_offset": 0, 00:24:29.364 "data_size": 65536 00:24:29.364 }, 00:24:29.364 { 00:24:29.364 "name": "BaseBdev3", 00:24:29.364 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:29.364 "is_configured": true, 00:24:29.364 "data_offset": 0, 00:24:29.364 "data_size": 65536 00:24:29.364 }, 00:24:29.364 { 00:24:29.364 "name": "BaseBdev4", 00:24:29.364 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:29.364 "is_configured": true, 00:24:29.364 "data_offset": 0, 00:24:29.364 "data_size": 65536 00:24:29.364 } 00:24:29.364 ] 00:24:29.364 }' 00:24:29.364 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:29.365 [2024-07-25 12:06:15.303681] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:29.365 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.365 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:29.365 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.365 12:06:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:29.365 [2024-07-25 12:06:15.411295] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:29.365 [2024-07-25 12:06:15.411463] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:30.301 [2024-07-25 12:06:16.189933] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:30.301 [2024-07-25 12:06:16.297578] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:30.301 [2024-07-25 12:06:16.299613] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.301 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.560 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.560 "name": "raid_bdev1", 00:24:30.560 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:30.560 "strip_size_kb": 0, 00:24:30.560 "state": "online", 00:24:30.560 "raid_level": "raid1", 00:24:30.560 "superblock": false, 00:24:30.560 "num_base_bdevs": 4, 00:24:30.560 "num_base_bdevs_discovered": 3, 00:24:30.560 "num_base_bdevs_operational": 3, 00:24:30.560 "base_bdevs_list": [ 00:24:30.560 { 00:24:30.560 "name": "spare", 00:24:30.560 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:30.560 "is_configured": true, 00:24:30.560 "data_offset": 0, 00:24:30.560 "data_size": 65536 00:24:30.560 }, 00:24:30.560 { 00:24:30.560 "name": null, 00:24:30.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.560 "is_configured": false, 00:24:30.560 "data_offset": 0, 00:24:30.560 "data_size": 65536 00:24:30.560 }, 00:24:30.560 { 00:24:30.560 "name": "BaseBdev3", 00:24:30.560 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:30.560 "is_configured": true, 00:24:30.560 "data_offset": 0, 00:24:30.560 "data_size": 65536 00:24:30.560 }, 00:24:30.560 { 00:24:30.560 "name": "BaseBdev4", 00:24:30.560 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:30.560 "is_configured": true, 
00:24:30.560 "data_offset": 0, 00:24:30.560 "data_size": 65536 00:24:30.560 } 00:24:30.560 ] 00:24:30.560 }' 00:24:30.560 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:30.560 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:30.560 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.819 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.819 "name": "raid_bdev1", 00:24:30.819 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:30.819 "strip_size_kb": 0, 00:24:30.819 "state": "online", 00:24:30.819 "raid_level": "raid1", 00:24:30.819 "superblock": false, 00:24:30.819 "num_base_bdevs": 4, 00:24:30.819 "num_base_bdevs_discovered": 3, 00:24:30.819 "num_base_bdevs_operational": 3, 00:24:30.819 "base_bdevs_list": [ 00:24:30.819 { 00:24:30.819 "name": "spare", 00:24:30.819 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:30.819 "is_configured": true, 00:24:30.819 "data_offset": 0, 00:24:30.819 "data_size": 65536 00:24:30.819 }, 00:24:30.819 { 00:24:30.819 "name": null, 00:24:30.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.819 "is_configured": false, 00:24:30.819 "data_offset": 0, 00:24:30.819 "data_size": 65536 00:24:30.819 }, 00:24:30.819 { 00:24:30.819 "name": "BaseBdev3", 00:24:30.819 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:30.819 "is_configured": true, 00:24:30.819 "data_offset": 0, 00:24:30.819 "data_size": 65536 00:24:30.820 }, 00:24:30.820 { 00:24:30.820 "name": "BaseBdev4", 00:24:30.820 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:30.820 "is_configured": true, 00:24:30.820 "data_offset": 0, 00:24:30.820 "data_size": 65536 00:24:30.820 } 00:24:30.820 ] 00:24:30.820 }' 00:24:30.820 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:31.078 12:06:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.078 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.079 12:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.079 12:06:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.079 12:06:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.337 12:06:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.337 "name": "raid_bdev1", 00:24:31.337 "uuid": "7e09788e-6084-41fc-b3d9-c82efd2d8d35", 00:24:31.337 "strip_size_kb": 0, 00:24:31.337 "state": "online", 00:24:31.337 "raid_level": "raid1", 00:24:31.337 "superblock": false, 00:24:31.337 "num_base_bdevs": 4, 00:24:31.337 "num_base_bdevs_discovered": 3, 00:24:31.337 "num_base_bdevs_operational": 3, 00:24:31.337 "base_bdevs_list": [ 00:24:31.337 { 00:24:31.337 "name": "spare", 00:24:31.337 "uuid": "7205b861-5c3f-5265-b5b9-4405a4e5254e", 00:24:31.337 "is_configured": true, 00:24:31.337 "data_offset": 0, 00:24:31.337 "data_size": 65536 00:24:31.337 }, 00:24:31.337 { 00:24:31.337 "name": null, 00:24:31.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.337 "is_configured": false, 00:24:31.337 "data_offset": 0, 00:24:31.337 "data_size": 65536 00:24:31.337 }, 00:24:31.337 { 00:24:31.337 "name": "BaseBdev3", 00:24:31.337 "uuid": "817cb75b-ea58-5b71-a68e-c22a016bf921", 00:24:31.337 "is_configured": true, 00:24:31.337 "data_offset": 0, 00:24:31.337 "data_size": 65536 00:24:31.337 }, 00:24:31.337 { 00:24:31.337 "name": "BaseBdev4", 00:24:31.337 "uuid": "dd099014-073d-509e-aeef-c5344928db92", 00:24:31.337 "is_configured": true, 00:24:31.337 "data_offset": 0, 00:24:31.337 "data_size": 65536 00:24:31.337 } 00:24:31.337 ] 00:24:31.337 }' 00:24:31.337 12:06:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.337 12:06:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:31.905 12:06:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:31.905 [2024-07-25 12:06:18.003025] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.905 [2024-07-25 12:06:18.003054] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.164 00:24:32.164 Latency(us) 00:24:32.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.164 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 
3145728) 00:24:32.164 raid_bdev1 : 10.67 95.02 285.07 0.00 0.00 13308.17 271.97 111568.49 00:24:32.164 =================================================================================================================== 00:24:32.164 Total : 95.02 285.07 0.00 0.00 13308.17 271.97 111568.49 00:24:32.164 [2024-07-25 12:06:18.058786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.164 [2024-07-25 12:06:18.058812] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.164 [2024-07-25 12:06:18.058897] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.164 [2024-07-25 12:06:18.058908] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10bb5b0 name raid_bdev1, state offline 00:24:32.164 0 00:24:32.164 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:32.164 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.423 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:32.423 /dev/nbd0 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:32.682 
12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:32.682 1+0 records in 00:24:32.682 1+0 records out 00:24:32.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255145 s, 16.1 MB/s 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.682 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:32.941 /dev/nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:32.941 1+0 records in 00:24:32.941 1+0 records out 00:24:32.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240128 s, 17.1 MB/s 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:32.941 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:33.200 12:06:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:33.200 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:33.458 /dev/nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:33.458 1+0 records in 00:24:33.458 1+0 records out 00:24:33.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259507 s, 15.8 MB/s 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.458 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.716 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # 
killprocess 54735 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 54735 ']' 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 54735 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.975 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 54735 00:24:33.975 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:33.975 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:33.975 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 54735' 00:24:33.975 killing process with pid 54735 00:24:33.975 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 54735 00:24:33.975 Received shutdown signal, test time was about 12.622674 seconds 00:24:33.975 00:24:33.975 Latency(us) 00:24:33.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.975 =================================================================================================================== 00:24:33.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.975 [2024-07-25 12:06:20.011568] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.975 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 54735 00:24:33.975 [2024-07-25 12:06:20.046540] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:34.234 00:24:34.234 real 0m17.936s 00:24:34.234 user 0m27.828s 00:24:34.234 sys 0m3.140s 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.234 ************************************ 00:24:34.234 END TEST raid_rebuild_test_io 00:24:34.234 ************************************ 00:24:34.234 12:06:20 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:34.234 12:06:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:34.234 12:06:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:34.234 12:06:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:34.234 ************************************ 00:24:34.234 START TEST raid_rebuild_test_sb_io 00:24:34.234 ************************************ 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:34.234 12:06:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:34.234 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=58411 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 58411 /var/tmp/spdk-raid.sock 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 58411 ']' 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.235 12:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.494 [2024-07-25 12:06:20.395823] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:24:34.494 [2024-07-25 12:06:20.395880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58411 ] 00:24:34.494 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:34.494 Zero copy mechanism will not be used. 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:34.494 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.494 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:34.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:34.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.495 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:34.495 [2024-07-25 12:06:20.529911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.754 [2024-07-25 12:06:20.613230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.754 [2024-07-25 12:06:20.671170] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.754 [2024-07-25 12:06:20.671204] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.321 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.321 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:24:35.321 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:35.321 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:35.580 BaseBdev1_malloc 00:24:35.580 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:35.580 [2024-07-25 12:06:21.631720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:35.580 [2024-07-25 12:06:21.631765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.580 [2024-07-25 12:06:21.631784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x1df55f0 00:24:35.580 [2024-07-25 12:06:21.631795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.580 [2024-07-25 12:06:21.633238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.580 [2024-07-25 12:06:21.633267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.580 BaseBdev1 00:24:35.580 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:35.580 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:35.840 BaseBdev2_malloc 00:24:35.840 12:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:36.099 [2024-07-25 12:06:22.037045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:36.099 [2024-07-25 12:06:22.037084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.099 [2024-07-25 12:06:22.037100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f99130 00:24:36.099 [2024-07-25 12:06:22.037111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.099 [2024-07-25 12:06:22.038438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.099 [2024-07-25 12:06:22.038464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:36.099 BaseBdev2 00:24:36.099 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:36.099 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:36.358 BaseBdev3_malloc 00:24:36.358 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:36.358 [2024-07-25 12:06:22.382113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:36.358 [2024-07-25 12:06:22.382158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.358 [2024-07-25 12:06:22.382175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8f420 00:24:36.358 [2024-07-25 12:06:22.382191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.358 [2024-07-25 12:06:22.383452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.358 [2024-07-25 12:06:22.383477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:36.358 BaseBdev3 00:24:36.358 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:36.358 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:36.617 BaseBdev4_malloc 00:24:36.617 12:06:22 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:36.617 [2024-07-25 12:06:22.723098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:36.617 [2024-07-25 12:06:22.723134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.617 [2024-07-25 12:06:22.723154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8fd40 00:24:36.617 [2024-07-25 12:06:22.723165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.617 [2024-07-25 12:06:22.724409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.617 [2024-07-25 12:06:22.724433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:36.617 BaseBdev4 00:24:36.875 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:36.875 spare_malloc 00:24:36.875 12:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:37.133 spare_delay 00:24:37.133 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:37.133 [2024-07-25 12:06:23.232441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:37.133 [2024-07-25 12:06:23.232473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.133 [2024-07-25 12:06:23.232490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1deedb0 00:24:37.133 [2024-07-25 12:06:23.232501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.133 [2024-07-25 12:06:23.233757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.133 [2024-07-25 12:06:23.233781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:37.133 spare 00:24:37.133 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:37.391 [2024-07-25 12:06:23.445038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.391 [2024-07-25 12:06:23.446151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:37.391 [2024-07-25 12:06:23.446200] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.391 [2024-07-25 12:06:23.446241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:37.391 [2024-07-25 12:06:23.446417] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1df15b0 00:24:37.391 [2024-07-25 12:06:23.446428] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:37.391 [2024-07-25 12:06:23.446596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df1580 
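For reference, the device setup traced above can be replayed by hand against a bdevperf instance that is already listening on /var/tmp/spdk-raid.sock (as started earlier with the -r option). This is only an illustrative sketch built from the commands and arguments that appear in this run: the RPC variable is shorthand introduced here, jq is assumed to be available on the host, and the malloc/passthru pair would be repeated for BaseBdev2 through BaseBdev4 just as the test loop does.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Base bdev: a 32 MB malloc bdev with a 512-byte block size, wrapped in a passthru bdev.
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
# Spare device used for the rebuild: malloc -> delay -> passthru, matching the spare_malloc/spare_delay/spare chain above.
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare
# Assemble the raid1 bdev with an on-disk superblock (-s) over the four base bdevs.
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
# Inspect the assembled array, as verify_raid_bdev_state does in the records that follow.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
The rebuild exercised by this test is then kicked off with bdev_raid_add_base_bdev raid_bdev1 spare, which is what produces the "Started rebuild on raid bdev raid_bdev1" records further down in the trace.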
00:24:37.391 [2024-07-25 12:06:23.446731] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1df15b0 00:24:37.391 [2024-07-25 12:06:23.446740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1df15b0 00:24:37.391 [2024-07-25 12:06:23.446827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.392 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.649 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.649 "name": "raid_bdev1", 00:24:37.649 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:37.649 "strip_size_kb": 0, 00:24:37.649 "state": "online", 00:24:37.649 "raid_level": "raid1", 00:24:37.649 "superblock": true, 00:24:37.649 "num_base_bdevs": 4, 00:24:37.649 "num_base_bdevs_discovered": 4, 00:24:37.649 "num_base_bdevs_operational": 4, 00:24:37.649 "base_bdevs_list": [ 00:24:37.649 { 00:24:37.649 "name": "BaseBdev1", 00:24:37.649 "uuid": "d57b4f13-71e7-57b1-88fa-e8f2f6fffa4f", 00:24:37.649 "is_configured": true, 00:24:37.649 "data_offset": 2048, 00:24:37.649 "data_size": 63488 00:24:37.649 }, 00:24:37.649 { 00:24:37.649 "name": "BaseBdev2", 00:24:37.649 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:37.649 "is_configured": true, 00:24:37.649 "data_offset": 2048, 00:24:37.649 "data_size": 63488 00:24:37.649 }, 00:24:37.649 { 00:24:37.649 "name": "BaseBdev3", 00:24:37.649 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:37.649 "is_configured": true, 00:24:37.649 "data_offset": 2048, 00:24:37.649 "data_size": 63488 00:24:37.649 }, 00:24:37.649 { 00:24:37.649 "name": "BaseBdev4", 00:24:37.649 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:37.649 "is_configured": true, 00:24:37.649 "data_offset": 2048, 00:24:37.649 "data_size": 63488 00:24:37.649 } 00:24:37.649 ] 00:24:37.649 }' 00:24:37.649 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.649 12:06:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.216 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:38.216 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:38.474 [2024-07-25 12:06:24.488035] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.474 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:38.474 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.474 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:38.771 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:38.771 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:38.771 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:38.771 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:38.771 [2024-07-25 12:06:24.842663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8e490 00:24:38.771 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:38.771 Zero copy mechanism will not be used. 00:24:38.771 Running I/O for 60 seconds... 00:24:39.030 [2024-07-25 12:06:24.904755] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.030 [2024-07-25 12:06:24.912344] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1f8e490 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.030 12:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.289 12:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.289 
"name": "raid_bdev1", 00:24:39.289 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:39.289 "strip_size_kb": 0, 00:24:39.289 "state": "online", 00:24:39.289 "raid_level": "raid1", 00:24:39.289 "superblock": true, 00:24:39.289 "num_base_bdevs": 4, 00:24:39.289 "num_base_bdevs_discovered": 3, 00:24:39.289 "num_base_bdevs_operational": 3, 00:24:39.289 "base_bdevs_list": [ 00:24:39.289 { 00:24:39.289 "name": null, 00:24:39.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.289 "is_configured": false, 00:24:39.289 "data_offset": 2048, 00:24:39.289 "data_size": 63488 00:24:39.289 }, 00:24:39.289 { 00:24:39.289 "name": "BaseBdev2", 00:24:39.289 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:39.289 "is_configured": true, 00:24:39.289 "data_offset": 2048, 00:24:39.289 "data_size": 63488 00:24:39.289 }, 00:24:39.289 { 00:24:39.289 "name": "BaseBdev3", 00:24:39.289 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:39.289 "is_configured": true, 00:24:39.289 "data_offset": 2048, 00:24:39.289 "data_size": 63488 00:24:39.289 }, 00:24:39.289 { 00:24:39.289 "name": "BaseBdev4", 00:24:39.289 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:39.289 "is_configured": true, 00:24:39.289 "data_offset": 2048, 00:24:39.289 "data_size": 63488 00:24:39.289 } 00:24:39.289 ] 00:24:39.289 }' 00:24:39.289 12:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.289 12:06:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:39.857 12:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.116 [2024-07-25 12:06:26.006669] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.116 12:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:40.116 [2024-07-25 12:06:26.074351] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8ca50 00:24:40.116 [2024-07-25 12:06:26.076550] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:40.116 [2024-07-25 12:06:26.193957] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:40.116 [2024-07-25 12:06:26.195063] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:40.374 [2024-07-25 12:06:26.415381] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.374 [2024-07-25 12:06:26.415515] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.942 [2024-07-25 12:06:26.758768] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:40.942 [2024-07-25 12:06:26.759885] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:40.942 [2024-07-25 12:06:26.998922] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.942 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.942 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:24:40.942 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:40.942 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:40.942 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.202 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.202 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.202 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.202 "name": "raid_bdev1", 00:24:41.202 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:41.202 "strip_size_kb": 0, 00:24:41.202 "state": "online", 00:24:41.202 "raid_level": "raid1", 00:24:41.202 "superblock": true, 00:24:41.202 "num_base_bdevs": 4, 00:24:41.202 "num_base_bdevs_discovered": 4, 00:24:41.202 "num_base_bdevs_operational": 4, 00:24:41.202 "process": { 00:24:41.202 "type": "rebuild", 00:24:41.202 "target": "spare", 00:24:41.202 "progress": { 00:24:41.202 "blocks": 12288, 00:24:41.202 "percent": 19 00:24:41.202 } 00:24:41.202 }, 00:24:41.202 "base_bdevs_list": [ 00:24:41.202 { 00:24:41.202 "name": "spare", 00:24:41.202 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:41.202 "is_configured": true, 00:24:41.202 "data_offset": 2048, 00:24:41.202 "data_size": 63488 00:24:41.202 }, 00:24:41.202 { 00:24:41.202 "name": "BaseBdev2", 00:24:41.202 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:41.202 "is_configured": true, 00:24:41.202 "data_offset": 2048, 00:24:41.202 "data_size": 63488 00:24:41.202 }, 00:24:41.202 { 00:24:41.202 "name": "BaseBdev3", 00:24:41.202 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:41.202 "is_configured": true, 00:24:41.202 "data_offset": 2048, 00:24:41.202 "data_size": 63488 00:24:41.202 }, 00:24:41.202 { 00:24:41.202 "name": "BaseBdev4", 00:24:41.202 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:41.202 "is_configured": true, 00:24:41.202 "data_offset": 2048, 00:24:41.202 "data_size": 63488 00:24:41.202 } 00:24:41.202 ] 00:24:41.202 }' 00:24:41.202 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.461 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.461 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:41.461 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.461 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:41.461 [2024-07-25 12:06:27.503617] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:41.721 [2024-07-25 12:06:27.586156] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.721 [2024-07-25 12:06:27.623318] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:41.721 [2024-07-25 12:06:27.734648] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:24:41.721 [2024-07-25 12:06:27.735990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.721 [2024-07-25 12:06:27.736015] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.721 [2024-07-25 12:06:27.736024] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:41.721 [2024-07-25 12:06:27.749675] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1f8e490 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.721 12:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.981 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.981 "name": "raid_bdev1", 00:24:41.981 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:41.981 "strip_size_kb": 0, 00:24:41.981 "state": "online", 00:24:41.981 "raid_level": "raid1", 00:24:41.981 "superblock": true, 00:24:41.981 "num_base_bdevs": 4, 00:24:41.981 "num_base_bdevs_discovered": 3, 00:24:41.981 "num_base_bdevs_operational": 3, 00:24:41.981 "base_bdevs_list": [ 00:24:41.981 { 00:24:41.981 "name": null, 00:24:41.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.981 "is_configured": false, 00:24:41.981 "data_offset": 2048, 00:24:41.981 "data_size": 63488 00:24:41.981 }, 00:24:41.981 { 00:24:41.981 "name": "BaseBdev2", 00:24:41.981 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:41.981 "is_configured": true, 00:24:41.981 "data_offset": 2048, 00:24:41.981 "data_size": 63488 00:24:41.981 }, 00:24:41.981 { 00:24:41.981 "name": "BaseBdev3", 00:24:41.981 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:41.981 "is_configured": true, 00:24:41.981 "data_offset": 2048, 00:24:41.981 "data_size": 63488 00:24:41.981 }, 00:24:41.981 { 00:24:41.981 "name": "BaseBdev4", 00:24:41.981 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:41.981 "is_configured": true, 00:24:41.981 "data_offset": 2048, 00:24:41.981 "data_size": 63488 00:24:41.981 } 00:24:41.981 ] 00:24:41.981 }' 00:24:41.981 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.981 12:06:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.549 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.808 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:42.808 "name": "raid_bdev1", 00:24:42.808 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:42.808 "strip_size_kb": 0, 00:24:42.808 "state": "online", 00:24:42.808 "raid_level": "raid1", 00:24:42.808 "superblock": true, 00:24:42.808 "num_base_bdevs": 4, 00:24:42.808 "num_base_bdevs_discovered": 3, 00:24:42.808 "num_base_bdevs_operational": 3, 00:24:42.808 "base_bdevs_list": [ 00:24:42.808 { 00:24:42.808 "name": null, 00:24:42.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.808 "is_configured": false, 00:24:42.808 "data_offset": 2048, 00:24:42.808 "data_size": 63488 00:24:42.808 }, 00:24:42.808 { 00:24:42.808 "name": "BaseBdev2", 00:24:42.808 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:42.808 "is_configured": true, 00:24:42.808 "data_offset": 2048, 00:24:42.808 "data_size": 63488 00:24:42.808 }, 00:24:42.808 { 00:24:42.808 "name": "BaseBdev3", 00:24:42.808 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:42.808 "is_configured": true, 00:24:42.808 "data_offset": 2048, 00:24:42.808 "data_size": 63488 00:24:42.808 }, 00:24:42.808 { 00:24:42.808 "name": "BaseBdev4", 00:24:42.808 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:42.808 "is_configured": true, 00:24:42.808 "data_offset": 2048, 00:24:42.808 "data_size": 63488 00:24:42.808 } 00:24:42.808 ] 00:24:42.808 }' 00:24:42.808 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.808 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:42.808 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.067 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:43.067 12:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:43.067 [2024-07-25 12:06:29.160753] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.326 12:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:43.326 [2024-07-25 12:06:29.221083] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8e730 00:24:43.326 [2024-07-25 12:06:29.222501] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:24:43.326 [2024-07-25 12:06:29.365090] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:43.326 [2024-07-25 12:06:29.366224] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:43.584 [2024-07-25 12:06:29.604052] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:43.585 [2024-07-25 12:06:29.604610] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:43.843 [2024-07-25 12:06:29.955305] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:44.101 [2024-07-25 12:06:30.183897] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.102 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.360 [2024-07-25 12:06:30.415545] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:44.360 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.360 "name": "raid_bdev1", 00:24:44.360 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:44.360 "strip_size_kb": 0, 00:24:44.360 "state": "online", 00:24:44.360 "raid_level": "raid1", 00:24:44.360 "superblock": true, 00:24:44.360 "num_base_bdevs": 4, 00:24:44.360 "num_base_bdevs_discovered": 4, 00:24:44.360 "num_base_bdevs_operational": 4, 00:24:44.360 "process": { 00:24:44.360 "type": "rebuild", 00:24:44.360 "target": "spare", 00:24:44.360 "progress": { 00:24:44.360 "blocks": 14336, 00:24:44.360 "percent": 22 00:24:44.360 } 00:24:44.360 }, 00:24:44.360 "base_bdevs_list": [ 00:24:44.360 { 00:24:44.360 "name": "spare", 00:24:44.360 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:44.360 "is_configured": true, 00:24:44.360 "data_offset": 2048, 00:24:44.360 "data_size": 63488 00:24:44.360 }, 00:24:44.360 { 00:24:44.360 "name": "BaseBdev2", 00:24:44.360 "uuid": "c5f87746-9a5b-592c-b6d3-46b254d2185b", 00:24:44.360 "is_configured": true, 00:24:44.360 "data_offset": 2048, 00:24:44.360 "data_size": 63488 00:24:44.360 }, 00:24:44.360 { 00:24:44.360 "name": "BaseBdev3", 00:24:44.360 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:44.360 "is_configured": true, 00:24:44.360 "data_offset": 2048, 00:24:44.360 "data_size": 63488 00:24:44.360 }, 00:24:44.360 { 00:24:44.360 "name": "BaseBdev4", 00:24:44.360 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:44.360 
"is_configured": true, 00:24:44.360 "data_offset": 2048, 00:24:44.360 "data_size": 63488 00:24:44.360 } 00:24:44.360 ] 00:24:44.360 }' 00:24:44.360 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:44.619 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:44.619 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:44.619 [2024-07-25 12:06:30.578570] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:44.619 [2024-07-25 12:06:30.589609] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:44.878 [2024-07-25 12:06:30.755993] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:44.878 [2024-07-25 12:06:30.910055] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1f8e490 00:24:44.878 [2024-07-25 12:06:30.910083] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1f8e730 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.878 12:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.137 [2024-07-25 12:06:31.141566] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:45.137 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:24:45.137 "name": "raid_bdev1", 00:24:45.137 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:45.137 "strip_size_kb": 0, 00:24:45.137 "state": "online", 00:24:45.137 "raid_level": "raid1", 00:24:45.137 "superblock": true, 00:24:45.137 "num_base_bdevs": 4, 00:24:45.137 "num_base_bdevs_discovered": 3, 00:24:45.137 "num_base_bdevs_operational": 3, 00:24:45.137 "process": { 00:24:45.137 "type": "rebuild", 00:24:45.137 "target": "spare", 00:24:45.137 "progress": { 00:24:45.137 "blocks": 22528, 00:24:45.137 "percent": 35 00:24:45.137 } 00:24:45.137 }, 00:24:45.137 "base_bdevs_list": [ 00:24:45.137 { 00:24:45.137 "name": "spare", 00:24:45.137 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:45.137 "is_configured": true, 00:24:45.137 "data_offset": 2048, 00:24:45.137 "data_size": 63488 00:24:45.137 }, 00:24:45.137 { 00:24:45.137 "name": null, 00:24:45.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.137 "is_configured": false, 00:24:45.137 "data_offset": 2048, 00:24:45.137 "data_size": 63488 00:24:45.137 }, 00:24:45.137 { 00:24:45.137 "name": "BaseBdev3", 00:24:45.137 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:45.137 "is_configured": true, 00:24:45.137 "data_offset": 2048, 00:24:45.137 "data_size": 63488 00:24:45.137 }, 00:24:45.137 { 00:24:45.137 "name": "BaseBdev4", 00:24:45.137 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:45.138 "is_configured": true, 00:24:45.138 "data_offset": 2048, 00:24:45.138 "data_size": 63488 00:24:45.138 } 00:24:45.138 ] 00:24:45.138 }' 00:24:45.138 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.138 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.138 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=896 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.436 [2024-07-25 12:06:31.362333] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.436 "name": "raid_bdev1", 00:24:45.436 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:45.436 "strip_size_kb": 0, 00:24:45.436 
"state": "online", 00:24:45.436 "raid_level": "raid1", 00:24:45.436 "superblock": true, 00:24:45.436 "num_base_bdevs": 4, 00:24:45.436 "num_base_bdevs_discovered": 3, 00:24:45.436 "num_base_bdevs_operational": 3, 00:24:45.436 "process": { 00:24:45.436 "type": "rebuild", 00:24:45.436 "target": "spare", 00:24:45.436 "progress": { 00:24:45.436 "blocks": 26624, 00:24:45.436 "percent": 41 00:24:45.436 } 00:24:45.436 }, 00:24:45.436 "base_bdevs_list": [ 00:24:45.436 { 00:24:45.436 "name": "spare", 00:24:45.436 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:45.436 "is_configured": true, 00:24:45.436 "data_offset": 2048, 00:24:45.436 "data_size": 63488 00:24:45.436 }, 00:24:45.436 { 00:24:45.436 "name": null, 00:24:45.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.436 "is_configured": false, 00:24:45.436 "data_offset": 2048, 00:24:45.436 "data_size": 63488 00:24:45.436 }, 00:24:45.436 { 00:24:45.436 "name": "BaseBdev3", 00:24:45.436 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:45.436 "is_configured": true, 00:24:45.436 "data_offset": 2048, 00:24:45.436 "data_size": 63488 00:24:45.436 }, 00:24:45.436 { 00:24:45.436 "name": "BaseBdev4", 00:24:45.436 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:45.436 "is_configured": true, 00:24:45.436 "data_offset": 2048, 00:24:45.436 "data_size": 63488 00:24:45.436 } 00:24:45.436 ] 00:24:45.436 }' 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.436 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.695 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.695 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:45.695 [2024-07-25 12:06:31.801953] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:46.631 [2024-07-25 12:06:32.509997] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.631 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.631 [2024-07-25 12:06:32.628445] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:24:46.890 "name": "raid_bdev1", 00:24:46.890 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:46.890 "strip_size_kb": 0, 00:24:46.890 "state": "online", 00:24:46.890 "raid_level": "raid1", 00:24:46.890 "superblock": true, 00:24:46.890 "num_base_bdevs": 4, 00:24:46.890 "num_base_bdevs_discovered": 3, 00:24:46.890 "num_base_bdevs_operational": 3, 00:24:46.890 "process": { 00:24:46.890 "type": "rebuild", 00:24:46.890 "target": "spare", 00:24:46.890 "progress": { 00:24:46.890 "blocks": 47104, 00:24:46.890 "percent": 74 00:24:46.890 } 00:24:46.890 }, 00:24:46.890 "base_bdevs_list": [ 00:24:46.890 { 00:24:46.890 "name": "spare", 00:24:46.890 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:46.890 "is_configured": true, 00:24:46.890 "data_offset": 2048, 00:24:46.890 "data_size": 63488 00:24:46.890 }, 00:24:46.890 { 00:24:46.890 "name": null, 00:24:46.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.890 "is_configured": false, 00:24:46.890 "data_offset": 2048, 00:24:46.890 "data_size": 63488 00:24:46.890 }, 00:24:46.890 { 00:24:46.890 "name": "BaseBdev3", 00:24:46.890 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:46.890 "is_configured": true, 00:24:46.890 "data_offset": 2048, 00:24:46.890 "data_size": 63488 00:24:46.890 }, 00:24:46.890 { 00:24:46.890 "name": "BaseBdev4", 00:24:46.890 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:46.890 "is_configured": true, 00:24:46.890 "data_offset": 2048, 00:24:46.890 "data_size": 63488 00:24:46.890 } 00:24:46.890 ] 00:24:46.890 }' 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.890 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:46.890 [2024-07-25 12:06:32.967151] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:47.149 [2024-07-25 12:06:33.194159] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:47.414 [2024-07-25 12:06:33.530561] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.982 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.982 [2024-07-25 12:06:33.981057] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:47.982 [2024-07-25 12:06:34.088688] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:47.982 [2024-07-25 12:06:34.091944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.242 "name": "raid_bdev1", 00:24:48.242 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:48.242 "strip_size_kb": 0, 00:24:48.242 "state": "online", 00:24:48.242 "raid_level": "raid1", 00:24:48.242 "superblock": true, 00:24:48.242 "num_base_bdevs": 4, 00:24:48.242 "num_base_bdevs_discovered": 3, 00:24:48.242 "num_base_bdevs_operational": 3, 00:24:48.242 "base_bdevs_list": [ 00:24:48.242 { 00:24:48.242 "name": "spare", 00:24:48.242 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:48.242 "is_configured": true, 00:24:48.242 "data_offset": 2048, 00:24:48.242 "data_size": 63488 00:24:48.242 }, 00:24:48.242 { 00:24:48.242 "name": null, 00:24:48.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.242 "is_configured": false, 00:24:48.242 "data_offset": 2048, 00:24:48.242 "data_size": 63488 00:24:48.242 }, 00:24:48.242 { 00:24:48.242 "name": "BaseBdev3", 00:24:48.242 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:48.242 "is_configured": true, 00:24:48.242 "data_offset": 2048, 00:24:48.242 "data_size": 63488 00:24:48.242 }, 00:24:48.242 { 00:24:48.242 "name": "BaseBdev4", 00:24:48.242 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:48.242 "is_configured": true, 00:24:48.242 "data_offset": 2048, 00:24:48.242 "data_size": 63488 00:24:48.242 } 00:24:48.242 ] 00:24:48.242 }' 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.242 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.504 "name": "raid_bdev1", 00:24:48.504 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:48.504 
"strip_size_kb": 0, 00:24:48.504 "state": "online", 00:24:48.504 "raid_level": "raid1", 00:24:48.504 "superblock": true, 00:24:48.504 "num_base_bdevs": 4, 00:24:48.504 "num_base_bdevs_discovered": 3, 00:24:48.504 "num_base_bdevs_operational": 3, 00:24:48.504 "base_bdevs_list": [ 00:24:48.504 { 00:24:48.504 "name": "spare", 00:24:48.504 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:48.504 "is_configured": true, 00:24:48.504 "data_offset": 2048, 00:24:48.504 "data_size": 63488 00:24:48.504 }, 00:24:48.504 { 00:24:48.504 "name": null, 00:24:48.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.504 "is_configured": false, 00:24:48.504 "data_offset": 2048, 00:24:48.504 "data_size": 63488 00:24:48.504 }, 00:24:48.504 { 00:24:48.504 "name": "BaseBdev3", 00:24:48.504 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:48.504 "is_configured": true, 00:24:48.504 "data_offset": 2048, 00:24:48.504 "data_size": 63488 00:24:48.504 }, 00:24:48.504 { 00:24:48.504 "name": "BaseBdev4", 00:24:48.504 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:48.504 "is_configured": true, 00:24:48.504 "data_offset": 2048, 00:24:48.504 "data_size": 63488 00:24:48.504 } 00:24:48.504 ] 00:24:48.504 }' 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.504 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.764 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.764 "name": "raid_bdev1", 00:24:48.764 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:48.764 "strip_size_kb": 0, 00:24:48.764 "state": "online", 00:24:48.764 "raid_level": "raid1", 00:24:48.764 "superblock": true, 00:24:48.764 "num_base_bdevs": 4, 00:24:48.764 "num_base_bdevs_discovered": 3, 
00:24:48.764 "num_base_bdevs_operational": 3, 00:24:48.764 "base_bdevs_list": [ 00:24:48.764 { 00:24:48.764 "name": "spare", 00:24:48.764 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:48.764 "is_configured": true, 00:24:48.764 "data_offset": 2048, 00:24:48.764 "data_size": 63488 00:24:48.764 }, 00:24:48.764 { 00:24:48.764 "name": null, 00:24:48.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.764 "is_configured": false, 00:24:48.764 "data_offset": 2048, 00:24:48.764 "data_size": 63488 00:24:48.764 }, 00:24:48.764 { 00:24:48.764 "name": "BaseBdev3", 00:24:48.764 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:48.764 "is_configured": true, 00:24:48.764 "data_offset": 2048, 00:24:48.764 "data_size": 63488 00:24:48.764 }, 00:24:48.764 { 00:24:48.764 "name": "BaseBdev4", 00:24:48.764 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:48.764 "is_configured": true, 00:24:48.764 "data_offset": 2048, 00:24:48.764 "data_size": 63488 00:24:48.764 } 00:24:48.764 ] 00:24:48.764 }' 00:24:48.764 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.764 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:49.590 [2024-07-25 12:06:35.556413] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.590 [2024-07-25 12:06:35.556445] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:49.590 00:24:49.590 Latency(us) 00:24:49.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.590 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:49.590 raid_bdev1 : 10.76 96.12 288.36 0.00 0.00 13865.48 275.25 111568.49 00:24:49.590 =================================================================================================================== 00:24:49.590 Total : 96.12 288.36 0.00 0.00 13865.48 275.25 111568.49 00:24:49.590 [2024-07-25 12:06:35.632260] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.590 [2024-07-25 12:06:35.632288] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.590 [2024-07-25 12:06:35.632382] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.590 [2024-07-25 12:06:35.632394] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1df15b0 name raid_bdev1, state offline 00:24:49.590 0 00:24:49.590 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.590 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:49.849 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:49.849 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.850 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:50.108 /dev/nbd0 00:24:50.108 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:50.108 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:50.108 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:50.108 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.109 1+0 records in 00:24:50.109 1+0 records out 00:24:50.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263509 s, 15.5 MB/s 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # 
continue 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.109 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:50.367 /dev/nbd1 00:24:50.367 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:50.367 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:50.367 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:50.367 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.368 1+0 records in 00:24:50.368 1+0 records out 00:24:50.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025546 s, 16.0 MB/s 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:50.368 12:06:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.368 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.626 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:50.885 /dev/nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:50.885 
12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.885 1+0 records in 00:24:50.885 1+0 records out 00:24:50.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202326 s, 20.2 MB/s 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.885 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:51.143 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.402 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:51.660 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:51.919 [2024-07-25 12:06:37.962545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:51.919 [2024-07-25 12:06:37.962590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.919 [2024-07-25 12:06:37.962608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df0460 00:24:51.919 [2024-07-25 12:06:37.962624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.919 [2024-07-25 12:06:37.964151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.919 [2024-07-25 12:06:37.964178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:51.919 [2024-07-25 12:06:37.964249] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:51.919 [2024-07-25 12:06:37.964275] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:51.919 [2024-07-25 12:06:37.964369] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:51.919 [2024-07-25 12:06:37.964436] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:51.919 spare 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.919 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.179 [2024-07-25 12:06:38.064749] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1df43c0 00:24:52.179 [2024-07-25 12:06:38.064762] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:52.179 [2024-07-25 12:06:38.064940] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dec880 00:24:52.179 [2024-07-25 12:06:38.065071] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1df43c0 00:24:52.179 [2024-07-25 12:06:38.065081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1df43c0 00:24:52.179 [2024-07-25 12:06:38.065188] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.179 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.179 "name": "raid_bdev1", 00:24:52.179 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:52.179 "strip_size_kb": 0, 00:24:52.179 "state": "online", 00:24:52.179 "raid_level": "raid1", 00:24:52.179 "superblock": true, 00:24:52.179 "num_base_bdevs": 4, 00:24:52.179 "num_base_bdevs_discovered": 3, 00:24:52.179 "num_base_bdevs_operational": 3, 00:24:52.179 "base_bdevs_list": [ 00:24:52.179 { 00:24:52.179 "name": "spare", 00:24:52.179 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:52.179 "is_configured": true, 00:24:52.179 "data_offset": 2048, 00:24:52.179 "data_size": 63488 00:24:52.179 }, 00:24:52.179 { 00:24:52.179 "name": null, 00:24:52.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.179 "is_configured": false, 00:24:52.179 "data_offset": 2048, 00:24:52.179 "data_size": 63488 00:24:52.179 }, 00:24:52.179 { 00:24:52.179 "name": "BaseBdev3", 
00:24:52.179 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:52.179 "is_configured": true, 00:24:52.179 "data_offset": 2048, 00:24:52.179 "data_size": 63488 00:24:52.179 }, 00:24:52.179 { 00:24:52.179 "name": "BaseBdev4", 00:24:52.179 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:52.179 "is_configured": true, 00:24:52.179 "data_offset": 2048, 00:24:52.179 "data_size": 63488 00:24:52.179 } 00:24:52.179 ] 00:24:52.179 }' 00:24:52.179 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.179 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.785 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:53.362 "name": "raid_bdev1", 00:24:53.362 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:53.362 "strip_size_kb": 0, 00:24:53.362 "state": "online", 00:24:53.362 "raid_level": "raid1", 00:24:53.362 "superblock": true, 00:24:53.362 "num_base_bdevs": 4, 00:24:53.362 "num_base_bdevs_discovered": 3, 00:24:53.362 "num_base_bdevs_operational": 3, 00:24:53.362 "base_bdevs_list": [ 00:24:53.362 { 00:24:53.362 "name": "spare", 00:24:53.362 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:53.362 "is_configured": true, 00:24:53.362 "data_offset": 2048, 00:24:53.362 "data_size": 63488 00:24:53.362 }, 00:24:53.362 { 00:24:53.362 "name": null, 00:24:53.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.362 "is_configured": false, 00:24:53.362 "data_offset": 2048, 00:24:53.362 "data_size": 63488 00:24:53.362 }, 00:24:53.362 { 00:24:53.362 "name": "BaseBdev3", 00:24:53.362 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:53.362 "is_configured": true, 00:24:53.362 "data_offset": 2048, 00:24:53.362 "data_size": 63488 00:24:53.362 }, 00:24:53.362 { 00:24:53.362 "name": "BaseBdev4", 00:24:53.362 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:53.362 "is_configured": true, 00:24:53.362 "data_offset": 2048, 00:24:53.362 "data_size": 63488 00:24:53.362 } 00:24:53.362 ] 00:24:53.362 }' 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.362 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:53.621 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.621 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:53.880 [2024-07-25 12:06:39.787689] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.880 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.139 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.139 "name": "raid_bdev1", 00:24:54.139 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:54.139 "strip_size_kb": 0, 00:24:54.139 "state": "online", 00:24:54.139 "raid_level": "raid1", 00:24:54.139 "superblock": true, 00:24:54.139 "num_base_bdevs": 4, 00:24:54.139 "num_base_bdevs_discovered": 2, 00:24:54.139 "num_base_bdevs_operational": 2, 00:24:54.139 "base_bdevs_list": [ 00:24:54.139 { 00:24:54.139 "name": null, 00:24:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.139 "is_configured": false, 00:24:54.139 "data_offset": 2048, 00:24:54.139 "data_size": 63488 00:24:54.139 }, 00:24:54.139 { 00:24:54.139 "name": null, 00:24:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.139 "is_configured": false, 00:24:54.139 "data_offset": 2048, 00:24:54.139 "data_size": 63488 00:24:54.139 }, 00:24:54.139 { 00:24:54.139 "name": "BaseBdev3", 00:24:54.139 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:54.139 "is_configured": true, 00:24:54.139 "data_offset": 2048, 00:24:54.139 "data_size": 63488 00:24:54.139 }, 00:24:54.139 { 00:24:54.139 "name": "BaseBdev4", 00:24:54.139 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:54.139 "is_configured": true, 00:24:54.139 "data_offset": 2048, 00:24:54.139 "data_size": 63488 00:24:54.139 } 00:24:54.139 ] 00:24:54.139 }' 
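[editor's aside] The trace above repeatedly follows the same verification pattern: dump all RAID bdevs over the per-test RPC socket, pick out raid_bdev1 with jq, and compare individual fields (state, raid_level, num_base_bdevs_discovered) against expected values. A minimal sketch of that pattern is shown below; the rpc.py path, socket path, and bdev name are taken from this run, but the helper function name is illustrative only and is not the function defined in the SPDK test scripts.

# Sketch of the state-check pattern seen in the trace (assumptions noted above).
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
    local name=$1 expected_state=$2 expected_level=$3 expected_discovered=$4
    local info
    # Dump every RAID bdev and keep only the one under test
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<< "$info") == "$expected_level" ]] || return 1
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]] || return 1
}

# e.g. after the "spare" base bdev is removed, the trace expects 2 discovered members:
check_raid_state raid_bdev1 online raid1 2
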
00:24:54.139 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.139 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.706 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:54.706 [2024-07-25 12:06:40.806510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.706 [2024-07-25 12:06:40.806651] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:54.706 [2024-07-25 12:06:40.806666] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:54.706 [2024-07-25 12:06:40.806693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.706 [2024-07-25 12:06:40.810950] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df1230 00:24:54.706 [2024-07-25 12:06:40.813254] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:54.964 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.898 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:56.157 "name": "raid_bdev1", 00:24:56.157 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:56.157 "strip_size_kb": 0, 00:24:56.157 "state": "online", 00:24:56.157 "raid_level": "raid1", 00:24:56.157 "superblock": true, 00:24:56.157 "num_base_bdevs": 4, 00:24:56.157 "num_base_bdevs_discovered": 3, 00:24:56.157 "num_base_bdevs_operational": 3, 00:24:56.157 "process": { 00:24:56.157 "type": "rebuild", 00:24:56.157 "target": "spare", 00:24:56.157 "progress": { 00:24:56.157 "blocks": 24576, 00:24:56.157 "percent": 38 00:24:56.157 } 00:24:56.157 }, 00:24:56.157 "base_bdevs_list": [ 00:24:56.157 { 00:24:56.157 "name": "spare", 00:24:56.157 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:56.157 "is_configured": true, 00:24:56.157 "data_offset": 2048, 00:24:56.157 "data_size": 63488 00:24:56.157 }, 00:24:56.157 { 00:24:56.157 "name": null, 00:24:56.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.157 "is_configured": false, 00:24:56.157 "data_offset": 2048, 00:24:56.157 "data_size": 63488 00:24:56.157 }, 00:24:56.157 { 00:24:56.157 "name": "BaseBdev3", 00:24:56.157 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:56.157 "is_configured": true, 00:24:56.157 
"data_offset": 2048, 00:24:56.157 "data_size": 63488 00:24:56.157 }, 00:24:56.157 { 00:24:56.157 "name": "BaseBdev4", 00:24:56.157 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:56.157 "is_configured": true, 00:24:56.157 "data_offset": 2048, 00:24:56.157 "data_size": 63488 00:24:56.157 } 00:24:56.157 ] 00:24:56.157 }' 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.157 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:56.417 [2024-07-25 12:06:42.352858] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.417 [2024-07-25 12:06:42.425016] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:56.417 [2024-07-25 12:06:42.425062] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.417 [2024-07-25 12:06:42.425077] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.417 [2024-07-25 12:06:42.425085] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.417 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.676 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.676 "name": "raid_bdev1", 00:24:56.676 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:56.676 "strip_size_kb": 0, 00:24:56.676 "state": "online", 00:24:56.676 "raid_level": "raid1", 00:24:56.676 "superblock": true, 00:24:56.676 "num_base_bdevs": 4, 00:24:56.676 "num_base_bdevs_discovered": 2, 00:24:56.676 "num_base_bdevs_operational": 2, 
00:24:56.676 "base_bdevs_list": [ 00:24:56.676 { 00:24:56.676 "name": null, 00:24:56.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.676 "is_configured": false, 00:24:56.676 "data_offset": 2048, 00:24:56.676 "data_size": 63488 00:24:56.676 }, 00:24:56.676 { 00:24:56.676 "name": null, 00:24:56.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.676 "is_configured": false, 00:24:56.676 "data_offset": 2048, 00:24:56.676 "data_size": 63488 00:24:56.676 }, 00:24:56.676 { 00:24:56.676 "name": "BaseBdev3", 00:24:56.676 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:56.676 "is_configured": true, 00:24:56.676 "data_offset": 2048, 00:24:56.676 "data_size": 63488 00:24:56.676 }, 00:24:56.676 { 00:24:56.676 "name": "BaseBdev4", 00:24:56.676 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:56.676 "is_configured": true, 00:24:56.676 "data_offset": 2048, 00:24:56.676 "data_size": 63488 00:24:56.676 } 00:24:56.676 ] 00:24:56.676 }' 00:24:56.676 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.676 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:57.244 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:57.514 [2024-07-25 12:06:43.464304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:57.514 [2024-07-25 12:06:43.464351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.514 [2024-07-25 12:06:43.464370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df1e10 00:24:57.514 [2024-07-25 12:06:43.464382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.514 [2024-07-25 12:06:43.464739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.514 [2024-07-25 12:06:43.464756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:57.514 [2024-07-25 12:06:43.464834] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:57.514 [2024-07-25 12:06:43.464846] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:57.514 [2024-07-25 12:06:43.464856] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:57.514 [2024-07-25 12:06:43.464873] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:57.514 [2024-07-25 12:06:43.469107] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df20a0 00:24:57.514 spare 00:24:57.514 [2024-07-25 12:06:43.470407] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.514 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.449 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.708 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.708 "name": "raid_bdev1", 00:24:58.708 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:58.708 "strip_size_kb": 0, 00:24:58.708 "state": "online", 00:24:58.708 "raid_level": "raid1", 00:24:58.708 "superblock": true, 00:24:58.708 "num_base_bdevs": 4, 00:24:58.708 "num_base_bdevs_discovered": 3, 00:24:58.708 "num_base_bdevs_operational": 3, 00:24:58.708 "process": { 00:24:58.708 "type": "rebuild", 00:24:58.708 "target": "spare", 00:24:58.708 "progress": { 00:24:58.708 "blocks": 24576, 00:24:58.708 "percent": 38 00:24:58.708 } 00:24:58.708 }, 00:24:58.708 "base_bdevs_list": [ 00:24:58.708 { 00:24:58.708 "name": "spare", 00:24:58.708 "uuid": "40467220-9a46-51f3-8537-7fda0c9ec996", 00:24:58.708 "is_configured": true, 00:24:58.708 "data_offset": 2048, 00:24:58.708 "data_size": 63488 00:24:58.708 }, 00:24:58.708 { 00:24:58.708 "name": null, 00:24:58.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.708 "is_configured": false, 00:24:58.708 "data_offset": 2048, 00:24:58.708 "data_size": 63488 00:24:58.708 }, 00:24:58.708 { 00:24:58.708 "name": "BaseBdev3", 00:24:58.708 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:58.708 "is_configured": true, 00:24:58.708 "data_offset": 2048, 00:24:58.708 "data_size": 63488 00:24:58.708 }, 00:24:58.708 { 00:24:58.708 "name": "BaseBdev4", 00:24:58.708 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:24:58.708 "is_configured": true, 00:24:58.708 "data_offset": 2048, 00:24:58.708 "data_size": 63488 00:24:58.708 } 00:24:58.708 ] 00:24:58.708 }' 00:24:58.708 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:58.708 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.708 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:58.708 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.708 12:06:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:58.967 [2024-07-25 12:06:45.002653] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.967 [2024-07-25 12:06:45.082208] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:58.967 [2024-07-25 12:06:45.082250] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.967 [2024-07-25 12:06:45.082264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.967 [2024-07-25 12:06:45.082272] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:59.225 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:59.225 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.225 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.226 "name": "raid_bdev1", 00:24:59.226 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:24:59.226 "strip_size_kb": 0, 00:24:59.226 "state": "online", 00:24:59.226 "raid_level": "raid1", 00:24:59.226 "superblock": true, 00:24:59.226 "num_base_bdevs": 4, 00:24:59.226 "num_base_bdevs_discovered": 2, 00:24:59.226 "num_base_bdevs_operational": 2, 00:24:59.226 "base_bdevs_list": [ 00:24:59.226 { 00:24:59.226 "name": null, 00:24:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.226 "is_configured": false, 00:24:59.226 "data_offset": 2048, 00:24:59.226 "data_size": 63488 00:24:59.226 }, 00:24:59.226 { 00:24:59.226 "name": null, 00:24:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.226 "is_configured": false, 00:24:59.226 "data_offset": 2048, 00:24:59.226 "data_size": 63488 00:24:59.226 }, 00:24:59.226 { 00:24:59.226 "name": "BaseBdev3", 00:24:59.226 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:24:59.226 "is_configured": true, 00:24:59.226 "data_offset": 2048, 00:24:59.226 "data_size": 63488 00:24:59.226 }, 00:24:59.226 { 00:24:59.226 "name": "BaseBdev4", 00:24:59.226 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 
00:24:59.226 "is_configured": true, 00:24:59.226 "data_offset": 2048, 00:24:59.226 "data_size": 63488 00:24:59.226 } 00:24:59.226 ] 00:24:59.226 }' 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.226 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.793 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:59.793 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:59.793 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:59.793 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:59.793 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.052 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.052 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.052 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.052 "name": "raid_bdev1", 00:25:00.052 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:25:00.052 "strip_size_kb": 0, 00:25:00.052 "state": "online", 00:25:00.052 "raid_level": "raid1", 00:25:00.052 "superblock": true, 00:25:00.052 "num_base_bdevs": 4, 00:25:00.052 "num_base_bdevs_discovered": 2, 00:25:00.052 "num_base_bdevs_operational": 2, 00:25:00.052 "base_bdevs_list": [ 00:25:00.052 { 00:25:00.052 "name": null, 00:25:00.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.052 "is_configured": false, 00:25:00.052 "data_offset": 2048, 00:25:00.052 "data_size": 63488 00:25:00.052 }, 00:25:00.052 { 00:25:00.052 "name": null, 00:25:00.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.052 "is_configured": false, 00:25:00.052 "data_offset": 2048, 00:25:00.052 "data_size": 63488 00:25:00.052 }, 00:25:00.052 { 00:25:00.052 "name": "BaseBdev3", 00:25:00.052 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:25:00.052 "is_configured": true, 00:25:00.052 "data_offset": 2048, 00:25:00.052 "data_size": 63488 00:25:00.052 }, 00:25:00.052 { 00:25:00.052 "name": "BaseBdev4", 00:25:00.052 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:25:00.052 "is_configured": true, 00:25:00.052 "data_offset": 2048, 00:25:00.052 "data_size": 63488 00:25:00.052 } 00:25:00.052 ] 00:25:00.052 }' 00:25:00.052 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.311 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:00.311 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.311 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:00.311 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:00.570 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc 
-p BaseBdev1 00:25:00.570 [2024-07-25 12:06:46.646544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:00.570 [2024-07-25 12:06:46.646590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.570 [2024-07-25 12:06:46.646608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e89ed0 00:25:00.570 [2024-07-25 12:06:46.646619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.570 [2024-07-25 12:06:46.646947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.570 [2024-07-25 12:06:46.646964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:00.570 [2024-07-25 12:06:46.647024] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:00.570 [2024-07-25 12:06:46.647035] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:00.570 [2024-07-25 12:06:46.647045] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:00.570 BaseBdev1 00:25:00.570 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.947 "name": "raid_bdev1", 00:25:01.947 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:25:01.947 "strip_size_kb": 0, 00:25:01.947 "state": "online", 00:25:01.947 "raid_level": "raid1", 00:25:01.947 "superblock": true, 00:25:01.947 "num_base_bdevs": 4, 00:25:01.947 "num_base_bdevs_discovered": 2, 00:25:01.947 "num_base_bdevs_operational": 2, 00:25:01.947 "base_bdevs_list": [ 00:25:01.947 { 00:25:01.947 "name": null, 00:25:01.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.947 "is_configured": false, 00:25:01.947 "data_offset": 2048, 00:25:01.947 "data_size": 63488 00:25:01.947 }, 00:25:01.947 { 00:25:01.947 "name": null, 00:25:01.947 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.947 "is_configured": false, 00:25:01.947 "data_offset": 2048, 00:25:01.947 "data_size": 63488 00:25:01.947 }, 00:25:01.947 { 00:25:01.947 "name": "BaseBdev3", 00:25:01.947 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:25:01.947 "is_configured": true, 00:25:01.947 "data_offset": 2048, 00:25:01.947 "data_size": 63488 00:25:01.947 }, 00:25:01.947 { 00:25:01.947 "name": "BaseBdev4", 00:25:01.947 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:25:01.947 "is_configured": true, 00:25:01.947 "data_offset": 2048, 00:25:01.947 "data_size": 63488 00:25:01.947 } 00:25:01.947 ] 00:25:01.947 }' 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.947 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.515 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:02.774 "name": "raid_bdev1", 00:25:02.774 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:25:02.774 "strip_size_kb": 0, 00:25:02.774 "state": "online", 00:25:02.774 "raid_level": "raid1", 00:25:02.774 "superblock": true, 00:25:02.774 "num_base_bdevs": 4, 00:25:02.774 "num_base_bdevs_discovered": 2, 00:25:02.774 "num_base_bdevs_operational": 2, 00:25:02.774 "base_bdevs_list": [ 00:25:02.774 { 00:25:02.774 "name": null, 00:25:02.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.774 "is_configured": false, 00:25:02.774 "data_offset": 2048, 00:25:02.774 "data_size": 63488 00:25:02.774 }, 00:25:02.774 { 00:25:02.774 "name": null, 00:25:02.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.774 "is_configured": false, 00:25:02.774 "data_offset": 2048, 00:25:02.774 "data_size": 63488 00:25:02.774 }, 00:25:02.774 { 00:25:02.774 "name": "BaseBdev3", 00:25:02.774 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:25:02.774 "is_configured": true, 00:25:02.774 "data_offset": 2048, 00:25:02.774 "data_size": 63488 00:25:02.774 }, 00:25:02.774 { 00:25:02.774 "name": "BaseBdev4", 00:25:02.774 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:25:02.774 "is_configured": true, 00:25:02.774 "data_offset": 2048, 00:25:02.774 "data_size": 63488 00:25:02.774 } 00:25:02.774 ] 00:25:02.774 }' 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:02.774 12:06:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:02.774 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:03.033 [2024-07-25 12:06:49.001045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.033 [2024-07-25 12:06:49.001169] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:03.033 [2024-07-25 12:06:49.001184] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:03.033 request: 00:25:03.033 { 00:25:03.033 "base_bdev": "BaseBdev1", 00:25:03.033 "raid_bdev": "raid_bdev1", 00:25:03.033 "method": "bdev_raid_add_base_bdev", 00:25:03.033 "req_id": 1 00:25:03.033 } 00:25:03.033 Got JSON-RPC error response 00:25:03.033 response: 00:25:03.033 { 00:25:03.033 "code": -22, 00:25:03.033 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:03.033 } 00:25:03.033 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:25:03.033 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:03.033 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:03.033 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:03.033 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.969 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.228 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.228 "name": "raid_bdev1", 00:25:04.228 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:25:04.228 "strip_size_kb": 0, 00:25:04.228 "state": "online", 00:25:04.228 "raid_level": "raid1", 00:25:04.228 "superblock": true, 00:25:04.228 "num_base_bdevs": 4, 00:25:04.228 "num_base_bdevs_discovered": 2, 00:25:04.228 "num_base_bdevs_operational": 2, 00:25:04.228 "base_bdevs_list": [ 00:25:04.228 { 00:25:04.228 "name": null, 00:25:04.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.228 "is_configured": false, 00:25:04.228 "data_offset": 2048, 00:25:04.228 "data_size": 63488 00:25:04.228 }, 00:25:04.228 { 00:25:04.228 "name": null, 00:25:04.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.228 "is_configured": false, 00:25:04.228 "data_offset": 2048, 00:25:04.228 "data_size": 63488 00:25:04.228 }, 00:25:04.228 { 00:25:04.228 "name": "BaseBdev3", 00:25:04.228 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:25:04.228 "is_configured": true, 00:25:04.228 "data_offset": 2048, 00:25:04.228 "data_size": 63488 00:25:04.228 }, 00:25:04.228 { 00:25:04.228 "name": "BaseBdev4", 00:25:04.228 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:25:04.228 "is_configured": true, 00:25:04.228 "data_offset": 2048, 00:25:04.228 "data_size": 63488 00:25:04.228 } 00:25:04.228 ] 00:25:04.228 }' 00:25:04.228 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.228 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.795 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:04.795 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.796 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:04.796 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:04.796 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:25:04.796 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.796 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.054 "name": "raid_bdev1", 00:25:05.054 "uuid": "260abdd6-5e0f-48f7-a5e8-99013b4fb290", 00:25:05.054 "strip_size_kb": 0, 00:25:05.054 "state": "online", 00:25:05.054 "raid_level": "raid1", 00:25:05.054 "superblock": true, 00:25:05.054 "num_base_bdevs": 4, 00:25:05.054 "num_base_bdevs_discovered": 2, 00:25:05.054 "num_base_bdevs_operational": 2, 00:25:05.054 "base_bdevs_list": [ 00:25:05.054 { 00:25:05.054 "name": null, 00:25:05.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.054 "is_configured": false, 00:25:05.054 "data_offset": 2048, 00:25:05.054 "data_size": 63488 00:25:05.054 }, 00:25:05.054 { 00:25:05.054 "name": null, 00:25:05.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.054 "is_configured": false, 00:25:05.054 "data_offset": 2048, 00:25:05.054 "data_size": 63488 00:25:05.054 }, 00:25:05.054 { 00:25:05.054 "name": "BaseBdev3", 00:25:05.054 "uuid": "0d11458e-6d30-59de-90af-da0dfba92879", 00:25:05.054 "is_configured": true, 00:25:05.054 "data_offset": 2048, 00:25:05.054 "data_size": 63488 00:25:05.054 }, 00:25:05.054 { 00:25:05.054 "name": "BaseBdev4", 00:25:05.054 "uuid": "92c3debc-1b79-5333-a671-c8ec75fd6b0d", 00:25:05.054 "is_configured": true, 00:25:05.054 "data_offset": 2048, 00:25:05.054 "data_size": 63488 00:25:05.054 } 00:25:05.054 ] 00:25:05.054 }' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 58411 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 58411 ']' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 58411 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.054 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58411 00:25:05.313 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.313 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.313 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58411' 00:25:05.313 killing process with pid 58411 00:25:05.313 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 58411 00:25:05.313 Received shutdown signal, test time was about 26.299993 seconds 00:25:05.313 00:25:05.313 Latency(us) 00:25:05.313 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.313 =================================================================================================================== 00:25:05.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.313 [2024-07-25 12:06:51.208918] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.313 [2024-07-25 12:06:51.209014] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.313 [2024-07-25 12:06:51.209071] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.314 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 58411 00:25:05.314 [2024-07-25 12:06:51.209083] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1df43c0 name raid_bdev1, state offline 00:25:05.314 [2024-07-25 12:06:51.244498] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.573 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:25:05.573 00:25:05.573 real 0m31.113s 00:25:05.573 user 0m48.569s 00:25:05.573 sys 0m4.987s 00:25:05.573 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.573 12:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.573 ************************************ 00:25:05.573 END TEST raid_rebuild_test_sb_io 00:25:05.573 ************************************ 00:25:05.573 12:06:51 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:25:05.573 12:06:51 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:25:05.573 12:06:51 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:25:05.573 12:06:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:05.573 12:06:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.573 12:06:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.573 ************************************ 00:25:05.573 START TEST raid_state_function_test_sb_4k 00:25:05.573 ************************************ 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=64081 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 64081' 00:25:05.573 Process raid pid: 64081 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 64081 /var/tmp/spdk-raid.sock 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 64081 ']' 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:05.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.573 12:06:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:05.573 [2024-07-25 12:06:51.600268] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:25:05.573 [2024-07-25 12:06:51.600338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.573 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:05.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:05.574 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:05.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.574 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:05.833 [2024-07-25 12:06:51.721177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.833 [2024-07-25 12:06:51.806984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.833 [2024-07-25 12:06:51.869681] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.833 [2024-07-25 12:06:51.869716] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.401 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.401 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:25:06.401 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:06.660 [2024-07-25 12:06:52.705305] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:06.660 [2024-07-25 12:06:52.705341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:06.660 [2024-07-25 12:06:52.705351] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.660 [2024-07-25 12:06:52.705362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.660 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.919 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.919 "name": "Existed_Raid", 00:25:06.919 "uuid": "ec6c9868-9959-4ac2-b1b5-402c023a65a0", 00:25:06.919 "strip_size_kb": 0, 00:25:06.919 "state": "configuring", 00:25:06.919 "raid_level": "raid1", 00:25:06.919 "superblock": true, 00:25:06.919 "num_base_bdevs": 2, 00:25:06.919 "num_base_bdevs_discovered": 0, 00:25:06.919 "num_base_bdevs_operational": 2, 00:25:06.919 "base_bdevs_list": [ 00:25:06.919 { 00:25:06.919 "name": "BaseBdev1", 00:25:06.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.919 "is_configured": false, 00:25:06.919 "data_offset": 0, 00:25:06.919 "data_size": 0 00:25:06.919 }, 00:25:06.919 { 00:25:06.919 "name": "BaseBdev2", 00:25:06.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.919 "is_configured": false, 00:25:06.919 "data_offset": 0, 00:25:06.919 "data_size": 0 00:25:06.919 } 00:25:06.919 ] 00:25:06.919 }' 00:25:06.919 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.919 12:06:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:07.484 12:06:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:07.743 [2024-07-25 12:06:53.751925] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.743 [2024-07-25 12:06:53.751950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d3f20 name Existed_Raid, state configuring 00:25:07.743 12:06:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:08.020 [2024-07-25 12:06:53.980541] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:08.020 [2024-07-25 12:06:53.980567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:08.020 [2024-07-25 12:06:53.980576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:08.020 [2024-07-25 12:06:53.980586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:08.020 12:06:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:25:08.299 [2024-07-25 12:06:54.218563] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.299 BaseBdev1 00:25:08.299 
12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:08.299 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:08.557 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:08.817 [ 00:25:08.817 { 00:25:08.817 "name": "BaseBdev1", 00:25:08.817 "aliases": [ 00:25:08.817 "2bb2fe3a-2781-48f1-bd39-bf090be91f65" 00:25:08.817 ], 00:25:08.817 "product_name": "Malloc disk", 00:25:08.817 "block_size": 4096, 00:25:08.817 "num_blocks": 8192, 00:25:08.817 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:08.817 "assigned_rate_limits": { 00:25:08.817 "rw_ios_per_sec": 0, 00:25:08.817 "rw_mbytes_per_sec": 0, 00:25:08.817 "r_mbytes_per_sec": 0, 00:25:08.817 "w_mbytes_per_sec": 0 00:25:08.817 }, 00:25:08.817 "claimed": true, 00:25:08.817 "claim_type": "exclusive_write", 00:25:08.817 "zoned": false, 00:25:08.817 "supported_io_types": { 00:25:08.817 "read": true, 00:25:08.817 "write": true, 00:25:08.817 "unmap": true, 00:25:08.817 "flush": true, 00:25:08.817 "reset": true, 00:25:08.817 "nvme_admin": false, 00:25:08.817 "nvme_io": false, 00:25:08.817 "nvme_io_md": false, 00:25:08.817 "write_zeroes": true, 00:25:08.817 "zcopy": true, 00:25:08.817 "get_zone_info": false, 00:25:08.817 "zone_management": false, 00:25:08.817 "zone_append": false, 00:25:08.817 "compare": false, 00:25:08.817 "compare_and_write": false, 00:25:08.817 "abort": true, 00:25:08.817 "seek_hole": false, 00:25:08.817 "seek_data": false, 00:25:08.817 "copy": true, 00:25:08.817 "nvme_iov_md": false 00:25:08.817 }, 00:25:08.817 "memory_domains": [ 00:25:08.817 { 00:25:08.817 "dma_device_id": "system", 00:25:08.817 "dma_device_type": 1 00:25:08.817 }, 00:25:08.817 { 00:25:08.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.817 "dma_device_type": 2 00:25:08.817 } 00:25:08.817 ], 00:25:08.817 "driver_specific": {} 00:25:08.817 } 00:25:08.817 ] 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:08.817 
12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.817 "name": "Existed_Raid", 00:25:08.817 "uuid": "b0493e7a-b076-4dd4-9247-cdfcfe5e6b98", 00:25:08.817 "strip_size_kb": 0, 00:25:08.817 "state": "configuring", 00:25:08.817 "raid_level": "raid1", 00:25:08.817 "superblock": true, 00:25:08.817 "num_base_bdevs": 2, 00:25:08.817 "num_base_bdevs_discovered": 1, 00:25:08.817 "num_base_bdevs_operational": 2, 00:25:08.817 "base_bdevs_list": [ 00:25:08.817 { 00:25:08.817 "name": "BaseBdev1", 00:25:08.817 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:08.817 "is_configured": true, 00:25:08.817 "data_offset": 256, 00:25:08.817 "data_size": 7936 00:25:08.817 }, 00:25:08.817 { 00:25:08.817 "name": "BaseBdev2", 00:25:08.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.817 "is_configured": false, 00:25:08.817 "data_offset": 0, 00:25:08.817 "data_size": 0 00:25:08.817 } 00:25:08.817 ] 00:25:08.817 }' 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.817 12:06:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:09.385 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:09.643 [2024-07-25 12:06:55.706486] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:09.643 [2024-07-25 12:06:55.706519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d3810 name Existed_Raid, state configuring 00:25:09.643 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:09.902 [2024-07-25 12:06:55.935112] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:09.902 [2024-07-25 12:06:55.936507] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:09.902 [2024-07-25 12:06:55.936537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- 
# verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:09.902 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.903 12:06:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.162 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.162 "name": "Existed_Raid", 00:25:10.162 "uuid": "9beb9974-f309-4e70-8573-4806ef54db00", 00:25:10.162 "strip_size_kb": 0, 00:25:10.162 "state": "configuring", 00:25:10.162 "raid_level": "raid1", 00:25:10.162 "superblock": true, 00:25:10.162 "num_base_bdevs": 2, 00:25:10.162 "num_base_bdevs_discovered": 1, 00:25:10.162 "num_base_bdevs_operational": 2, 00:25:10.162 "base_bdevs_list": [ 00:25:10.162 { 00:25:10.162 "name": "BaseBdev1", 00:25:10.162 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:10.162 "is_configured": true, 00:25:10.162 "data_offset": 256, 00:25:10.162 "data_size": 7936 00:25:10.162 }, 00:25:10.162 { 00:25:10.162 "name": "BaseBdev2", 00:25:10.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.162 "is_configured": false, 00:25:10.162 "data_offset": 0, 00:25:10.162 "data_size": 0 00:25:10.162 } 00:25:10.162 ] 00:25:10.162 }' 00:25:10.162 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.162 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:10.729 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:25:10.987 [2024-07-25 12:06:56.980957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:10.987 [2024-07-25 12:06:56.981091] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x15d4600 00:25:10.987 [2024-07-25 12:06:56.981105] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:10.987 [2024-07-25 12:06:56.981274] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15d59c0 00:25:10.987 [2024-07-25 12:06:56.981393] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15d4600 00:25:10.987 [2024-07-25 12:06:56.981404] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15d4600 00:25:10.987 [2024-07-25 12:06:56.981490] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.987 BaseBdev2 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:10.987 12:06:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:11.245 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:11.504 [ 00:25:11.504 { 00:25:11.504 "name": "BaseBdev2", 00:25:11.504 "aliases": [ 00:25:11.504 "e277a6ba-ee35-4ca7-9319-8e24a23758db" 00:25:11.504 ], 00:25:11.504 "product_name": "Malloc disk", 00:25:11.504 "block_size": 4096, 00:25:11.504 "num_blocks": 8192, 00:25:11.504 "uuid": "e277a6ba-ee35-4ca7-9319-8e24a23758db", 00:25:11.504 "assigned_rate_limits": { 00:25:11.504 "rw_ios_per_sec": 0, 00:25:11.504 "rw_mbytes_per_sec": 0, 00:25:11.504 "r_mbytes_per_sec": 0, 00:25:11.504 "w_mbytes_per_sec": 0 00:25:11.504 }, 00:25:11.504 "claimed": true, 00:25:11.504 "claim_type": "exclusive_write", 00:25:11.504 "zoned": false, 00:25:11.504 "supported_io_types": { 00:25:11.504 "read": true, 00:25:11.504 "write": true, 00:25:11.504 "unmap": true, 00:25:11.504 "flush": true, 00:25:11.504 "reset": true, 00:25:11.504 "nvme_admin": false, 00:25:11.504 "nvme_io": false, 00:25:11.504 "nvme_io_md": false, 00:25:11.504 "write_zeroes": true, 00:25:11.504 "zcopy": true, 00:25:11.504 "get_zone_info": false, 00:25:11.504 "zone_management": false, 00:25:11.504 "zone_append": false, 00:25:11.504 "compare": false, 00:25:11.504 "compare_and_write": false, 00:25:11.504 "abort": true, 00:25:11.504 "seek_hole": false, 00:25:11.504 "seek_data": false, 00:25:11.504 "copy": true, 00:25:11.504 "nvme_iov_md": false 00:25:11.504 }, 00:25:11.504 "memory_domains": [ 00:25:11.504 { 00:25:11.504 "dma_device_id": "system", 00:25:11.504 "dma_device_type": 1 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.504 "dma_device_type": 2 00:25:11.504 } 00:25:11.504 ], 00:25:11.504 "driver_specific": {} 00:25:11.504 } 00:25:11.504 ] 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:11.504 12:06:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.504 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.763 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.763 "name": "Existed_Raid", 00:25:11.763 "uuid": "9beb9974-f309-4e70-8573-4806ef54db00", 00:25:11.763 "strip_size_kb": 0, 00:25:11.763 "state": "online", 00:25:11.763 "raid_level": "raid1", 00:25:11.763 "superblock": true, 00:25:11.763 "num_base_bdevs": 2, 00:25:11.763 "num_base_bdevs_discovered": 2, 00:25:11.763 "num_base_bdevs_operational": 2, 00:25:11.763 "base_bdevs_list": [ 00:25:11.763 { 00:25:11.763 "name": "BaseBdev1", 00:25:11.763 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:11.763 "is_configured": true, 00:25:11.763 "data_offset": 256, 00:25:11.763 "data_size": 7936 00:25:11.763 }, 00:25:11.763 { 00:25:11.763 "name": "BaseBdev2", 00:25:11.763 "uuid": "e277a6ba-ee35-4ca7-9319-8e24a23758db", 00:25:11.763 "is_configured": true, 00:25:11.763 "data_offset": 256, 00:25:11.763 "data_size": 7936 00:25:11.763 } 00:25:11.763 ] 00:25:11.763 }' 00:25:11.763 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.763 12:06:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:12.330 12:06:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:12.330 [2024-07-25 12:06:58.425211] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.330 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:12.330 "name": "Existed_Raid", 00:25:12.330 "aliases": [ 00:25:12.330 "9beb9974-f309-4e70-8573-4806ef54db00" 00:25:12.330 ], 00:25:12.331 "product_name": "Raid Volume", 00:25:12.331 "block_size": 4096, 00:25:12.331 "num_blocks": 7936, 00:25:12.331 "uuid": "9beb9974-f309-4e70-8573-4806ef54db00", 00:25:12.331 "assigned_rate_limits": { 00:25:12.331 "rw_ios_per_sec": 0, 00:25:12.331 "rw_mbytes_per_sec": 0, 00:25:12.331 "r_mbytes_per_sec": 0, 00:25:12.331 "w_mbytes_per_sec": 0 00:25:12.331 }, 00:25:12.331 "claimed": false, 00:25:12.331 "zoned": false, 00:25:12.331 "supported_io_types": { 00:25:12.331 "read": true, 00:25:12.331 "write": true, 00:25:12.331 "unmap": false, 00:25:12.331 "flush": false, 00:25:12.331 "reset": true, 00:25:12.331 "nvme_admin": false, 00:25:12.331 "nvme_io": false, 00:25:12.331 "nvme_io_md": false, 00:25:12.331 "write_zeroes": true, 00:25:12.331 "zcopy": false, 00:25:12.331 "get_zone_info": false, 00:25:12.331 "zone_management": false, 00:25:12.331 "zone_append": false, 00:25:12.331 "compare": false, 00:25:12.331 "compare_and_write": false, 00:25:12.331 "abort": false, 00:25:12.331 "seek_hole": false, 00:25:12.331 "seek_data": false, 00:25:12.331 "copy": false, 00:25:12.331 "nvme_iov_md": false 00:25:12.331 }, 00:25:12.331 "memory_domains": [ 00:25:12.331 { 00:25:12.331 "dma_device_id": "system", 00:25:12.331 "dma_device_type": 1 00:25:12.331 }, 00:25:12.331 { 00:25:12.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.331 "dma_device_type": 2 00:25:12.331 }, 00:25:12.331 { 00:25:12.331 "dma_device_id": "system", 00:25:12.331 "dma_device_type": 1 00:25:12.331 }, 00:25:12.331 { 00:25:12.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.331 "dma_device_type": 2 00:25:12.331 } 00:25:12.331 ], 00:25:12.331 "driver_specific": { 00:25:12.331 "raid": { 00:25:12.331 "uuid": "9beb9974-f309-4e70-8573-4806ef54db00", 00:25:12.331 "strip_size_kb": 0, 00:25:12.331 "state": "online", 00:25:12.331 "raid_level": "raid1", 00:25:12.331 "superblock": true, 00:25:12.331 "num_base_bdevs": 2, 00:25:12.331 "num_base_bdevs_discovered": 2, 00:25:12.331 "num_base_bdevs_operational": 2, 00:25:12.331 "base_bdevs_list": [ 00:25:12.331 { 00:25:12.331 "name": "BaseBdev1", 00:25:12.331 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:12.331 "is_configured": true, 00:25:12.331 "data_offset": 256, 00:25:12.331 "data_size": 7936 00:25:12.331 }, 00:25:12.331 { 00:25:12.331 "name": "BaseBdev2", 00:25:12.331 "uuid": "e277a6ba-ee35-4ca7-9319-8e24a23758db", 00:25:12.331 "is_configured": true, 00:25:12.331 "data_offset": 256, 00:25:12.331 "data_size": 7936 00:25:12.331 } 00:25:12.331 ] 00:25:12.331 } 00:25:12.331 } 00:25:12.331 }' 00:25:12.589 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.589 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:12.589 BaseBdev2' 00:25:12.589 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.589 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.589 
12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:12.848 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.848 "name": "BaseBdev1", 00:25:12.848 "aliases": [ 00:25:12.848 "2bb2fe3a-2781-48f1-bd39-bf090be91f65" 00:25:12.848 ], 00:25:12.848 "product_name": "Malloc disk", 00:25:12.848 "block_size": 4096, 00:25:12.848 "num_blocks": 8192, 00:25:12.848 "uuid": "2bb2fe3a-2781-48f1-bd39-bf090be91f65", 00:25:12.848 "assigned_rate_limits": { 00:25:12.848 "rw_ios_per_sec": 0, 00:25:12.848 "rw_mbytes_per_sec": 0, 00:25:12.848 "r_mbytes_per_sec": 0, 00:25:12.848 "w_mbytes_per_sec": 0 00:25:12.848 }, 00:25:12.848 "claimed": true, 00:25:12.848 "claim_type": "exclusive_write", 00:25:12.848 "zoned": false, 00:25:12.848 "supported_io_types": { 00:25:12.848 "read": true, 00:25:12.848 "write": true, 00:25:12.848 "unmap": true, 00:25:12.848 "flush": true, 00:25:12.848 "reset": true, 00:25:12.848 "nvme_admin": false, 00:25:12.848 "nvme_io": false, 00:25:12.848 "nvme_io_md": false, 00:25:12.848 "write_zeroes": true, 00:25:12.849 "zcopy": true, 00:25:12.849 "get_zone_info": false, 00:25:12.849 "zone_management": false, 00:25:12.849 "zone_append": false, 00:25:12.849 "compare": false, 00:25:12.849 "compare_and_write": false, 00:25:12.849 "abort": true, 00:25:12.849 "seek_hole": false, 00:25:12.849 "seek_data": false, 00:25:12.849 "copy": true, 00:25:12.849 "nvme_iov_md": false 00:25:12.849 }, 00:25:12.849 "memory_domains": [ 00:25:12.849 { 00:25:12.849 "dma_device_id": "system", 00:25:12.849 "dma_device_type": 1 00:25:12.849 }, 00:25:12.849 { 00:25:12.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.849 "dma_device_type": 2 00:25:12.849 } 00:25:12.849 ], 00:25:12.849 "driver_specific": {} 00:25:12.849 }' 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.849 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.107 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.107 12:06:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.107 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.107 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.107 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.107 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b BaseBdev2 00:25:13.107 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.367 "name": "BaseBdev2", 00:25:13.367 "aliases": [ 00:25:13.367 "e277a6ba-ee35-4ca7-9319-8e24a23758db" 00:25:13.367 ], 00:25:13.367 "product_name": "Malloc disk", 00:25:13.367 "block_size": 4096, 00:25:13.367 "num_blocks": 8192, 00:25:13.367 "uuid": "e277a6ba-ee35-4ca7-9319-8e24a23758db", 00:25:13.367 "assigned_rate_limits": { 00:25:13.367 "rw_ios_per_sec": 0, 00:25:13.367 "rw_mbytes_per_sec": 0, 00:25:13.367 "r_mbytes_per_sec": 0, 00:25:13.367 "w_mbytes_per_sec": 0 00:25:13.367 }, 00:25:13.367 "claimed": true, 00:25:13.367 "claim_type": "exclusive_write", 00:25:13.367 "zoned": false, 00:25:13.367 "supported_io_types": { 00:25:13.367 "read": true, 00:25:13.367 "write": true, 00:25:13.367 "unmap": true, 00:25:13.367 "flush": true, 00:25:13.367 "reset": true, 00:25:13.367 "nvme_admin": false, 00:25:13.367 "nvme_io": false, 00:25:13.367 "nvme_io_md": false, 00:25:13.367 "write_zeroes": true, 00:25:13.367 "zcopy": true, 00:25:13.367 "get_zone_info": false, 00:25:13.367 "zone_management": false, 00:25:13.367 "zone_append": false, 00:25:13.367 "compare": false, 00:25:13.367 "compare_and_write": false, 00:25:13.367 "abort": true, 00:25:13.367 "seek_hole": false, 00:25:13.367 "seek_data": false, 00:25:13.367 "copy": true, 00:25:13.367 "nvme_iov_md": false 00:25:13.367 }, 00:25:13.367 "memory_domains": [ 00:25:13.367 { 00:25:13.367 "dma_device_id": "system", 00:25:13.367 "dma_device_type": 1 00:25:13.367 }, 00:25:13.367 { 00:25:13.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.367 "dma_device_type": 2 00:25:13.367 } 00:25:13.367 ], 00:25:13.367 "driver_specific": {} 00:25:13.367 }' 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.367 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.626 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:13.884 [2024-07-25 12:06:59.836735] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.884 12:06:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.143 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.143 "name": "Existed_Raid", 00:25:14.143 "uuid": "9beb9974-f309-4e70-8573-4806ef54db00", 00:25:14.143 "strip_size_kb": 0, 00:25:14.143 "state": "online", 00:25:14.143 "raid_level": "raid1", 00:25:14.143 "superblock": true, 00:25:14.143 "num_base_bdevs": 2, 00:25:14.143 "num_base_bdevs_discovered": 1, 00:25:14.143 "num_base_bdevs_operational": 1, 00:25:14.143 "base_bdevs_list": [ 00:25:14.143 { 00:25:14.143 "name": null, 00:25:14.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.143 "is_configured": false, 00:25:14.143 "data_offset": 256, 00:25:14.143 "data_size": 7936 00:25:14.143 }, 00:25:14.143 { 00:25:14.143 "name": "BaseBdev2", 00:25:14.143 "uuid": "e277a6ba-ee35-4ca7-9319-8e24a23758db", 00:25:14.143 "is_configured": true, 00:25:14.143 "data_offset": 256, 00:25:14.143 "data_size": 7936 00:25:14.143 } 00:25:14.143 ] 00:25:14.143 }' 00:25:14.143 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.143 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:14.709 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:14.709 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:14.709 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.709 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:14.967 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:14.967 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:14.967 12:07:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:14.967 [2024-07-25 12:07:01.084995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:14.967 [2024-07-25 12:07:01.085069] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.226 [2024-07-25 12:07:01.095204] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.226 [2024-07-25 12:07:01.095234] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.226 [2024-07-25 12:07:01.095245] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d4600 name Existed_Raid, state offline 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 64081 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 64081 ']' 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 64081 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:25:15.226 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64081 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64081' 00:25:15.485 killing process with pid 64081 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 64081 00:25:15.485 [2024-07-25 12:07:01.391911] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.485 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 64081 00:25:15.486 [2024-07-25 12:07:01.392744] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:15.486 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:25:15.486 00:25:15.486 real 0m10.049s 00:25:15.486 user 0m17.795s 00:25:15.486 sys 0m1.951s 00:25:15.486 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.486 12:07:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.486 ************************************ 00:25:15.486 END TEST raid_state_function_test_sb_4k 00:25:15.486 ************************************ 00:25:15.744 12:07:01 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:25:15.744 12:07:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:15.744 12:07:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.744 12:07:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:15.744 ************************************ 00:25:15.744 START TEST raid_superblock_test_4k 00:25:15.744 ************************************ 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65904 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65904 /var/tmp/spdk-raid.sock 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
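For orientation, an editorial sketch (not part of the captured log): the raid_superblock_test_4k run above launches the bdev_svc app listening on /var/tmp/spdk-raid.sock and then drives it entirely through scripts/rpc.py over that socket. A minimal standalone reproduction of the same flow could look like the lines below, assuming an SPDK checkout as the working directory; the real test uses the waitforlisten and killprocess helpers from autotest_common.sh rather than the polling loop shown here.

  # Sketch only: paths, the wait loop and the PID handling are assumptions.
  sock=/var/tmp/spdk-raid.sock
  ./test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
  svc_pid=$!
  # Poll until the app answers RPCs on the socket (the test uses waitforlisten instead).
  until ./scripts/rpc.py -s "$sock" bdev_get_bdevs >/dev/null 2>&1; do sleep 0.2; done
  # Two malloc bdevs with 4096-byte blocks, wrapped in passthru bdevs and assembled
  # into a RAID1 volume with an on-disk superblock (-s), mirroring the RPCs traced below.
  ./scripts/rpc.py -s "$sock" bdev_malloc_create 32 4096 -b malloc1
  ./scripts/rpc.py -s "$sock" bdev_malloc_create 32 4096 -b malloc2
  ./scripts/rpc.py -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  ./scripts/rpc.py -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  ./scripts/rpc.py -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  ./scripts/rpc.py -s "$sock" bdev_raid_delete raid_bdev1
  kill "$svc_pid"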
00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 65904 ']' 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:15.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.744 12:07:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.744 [2024-07-25 12:07:01.714394] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:25:15.744 [2024-07-25 12:07:01.714452] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65904 ] 00:25:15.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:15.745 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:15.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:15.745 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:15.745 [2024-07-25 12:07:01.846475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.003 [2024-07-25 12:07:01.933663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.003 [2024-07-25 12:07:01.990660] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.003 [2024-07-25 12:07:01.990696] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:16.570 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:25:16.828 malloc1 00:25:16.828 12:07:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.131 [2024-07-25 12:07:03.059346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.131 [2024-07-25 12:07:03.059387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.131 [2024-07-25 12:07:03.059406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d892f0 00:25:17.131 [2024-07-25 12:07:03.059418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.131 [2024-07-25 12:07:03.060910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.131 [2024-07-25 12:07:03.060937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.131 pt1 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:17.131 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:25:17.389 malloc2 00:25:17.389 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:17.648 [2024-07-25 12:07:03.520936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:17.648 [2024-07-25 12:07:03.520977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.648 [2024-07-25 12:07:03.520997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d8a6d0 00:25:17.648 [2024-07-25 12:07:03.521008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.648 [2024-07-25 12:07:03.522458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.648 [2024-07-25 12:07:03.522486] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:17.648 pt2 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:17.648 [2024-07-25 12:07:03.745548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:17.648 [2024-07-25 12:07:03.746690] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:17.648 [2024-07-25 12:07:03.746823] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f23310 00:25:17.648 [2024-07-25 12:07:03.746836] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:17.648 [2024-07-25 12:07:03.747016] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f22ce0 00:25:17.648 [2024-07-25 12:07:03.747155] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f23310 00:25:17.648 [2024-07-25 12:07:03.747166] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f23310 00:25:17.648 [2024-07-25 12:07:03.747257] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.648 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.907 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.907 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.907 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.907 "name": "raid_bdev1", 00:25:17.907 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:17.907 "strip_size_kb": 0, 00:25:17.907 "state": "online", 00:25:17.907 "raid_level": "raid1", 00:25:17.907 "superblock": true, 00:25:17.907 "num_base_bdevs": 2, 00:25:17.907 "num_base_bdevs_discovered": 2, 00:25:17.907 "num_base_bdevs_operational": 2, 00:25:17.907 "base_bdevs_list": [ 00:25:17.907 { 00:25:17.907 "name": "pt1", 00:25:17.907 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:17.907 "is_configured": true, 00:25:17.907 "data_offset": 256, 00:25:17.907 "data_size": 7936 00:25:17.907 }, 00:25:17.907 { 00:25:17.907 "name": "pt2", 00:25:17.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.907 "is_configured": true, 00:25:17.907 "data_offset": 256, 00:25:17.907 "data_size": 7936 00:25:17.907 } 00:25:17.907 ] 00:25:17.907 }' 00:25:17.907 12:07:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.907 12:07:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:18.474 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:18.732 [2024-07-25 12:07:04.784653] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.732 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:18.732 "name": "raid_bdev1", 00:25:18.732 "aliases": [ 00:25:18.732 "b8b9c045-1b59-4e78-b31c-a2821f788c3d" 00:25:18.732 ], 00:25:18.732 "product_name": "Raid Volume", 00:25:18.732 "block_size": 4096, 00:25:18.732 "num_blocks": 7936, 00:25:18.732 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:18.732 "assigned_rate_limits": { 00:25:18.732 "rw_ios_per_sec": 0, 00:25:18.732 "rw_mbytes_per_sec": 0, 00:25:18.732 "r_mbytes_per_sec": 0, 00:25:18.732 "w_mbytes_per_sec": 0 00:25:18.732 }, 00:25:18.732 "claimed": false, 00:25:18.732 "zoned": false, 00:25:18.732 "supported_io_types": { 00:25:18.732 "read": true, 00:25:18.732 "write": true, 00:25:18.732 "unmap": false, 00:25:18.732 "flush": false, 00:25:18.732 "reset": true, 00:25:18.732 "nvme_admin": false, 00:25:18.732 "nvme_io": false, 00:25:18.732 "nvme_io_md": false, 00:25:18.732 "write_zeroes": true, 00:25:18.732 "zcopy": false, 00:25:18.732 "get_zone_info": false, 00:25:18.732 "zone_management": false, 00:25:18.732 "zone_append": false, 00:25:18.732 "compare": false, 00:25:18.732 "compare_and_write": false, 00:25:18.732 "abort": false, 00:25:18.732 "seek_hole": false, 00:25:18.732 "seek_data": false, 00:25:18.732 "copy": false, 00:25:18.732 "nvme_iov_md": false 00:25:18.732 }, 00:25:18.732 "memory_domains": [ 00:25:18.732 { 00:25:18.732 "dma_device_id": "system", 00:25:18.732 "dma_device_type": 1 00:25:18.732 }, 00:25:18.732 { 00:25:18.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.732 "dma_device_type": 2 00:25:18.732 }, 00:25:18.732 { 00:25:18.732 "dma_device_id": "system", 00:25:18.732 "dma_device_type": 1 00:25:18.732 }, 00:25:18.732 { 00:25:18.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.732 "dma_device_type": 2 00:25:18.732 } 00:25:18.732 ], 
00:25:18.732 "driver_specific": { 00:25:18.732 "raid": { 00:25:18.732 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:18.732 "strip_size_kb": 0, 00:25:18.732 "state": "online", 00:25:18.732 "raid_level": "raid1", 00:25:18.732 "superblock": true, 00:25:18.732 "num_base_bdevs": 2, 00:25:18.732 "num_base_bdevs_discovered": 2, 00:25:18.732 "num_base_bdevs_operational": 2, 00:25:18.732 "base_bdevs_list": [ 00:25:18.732 { 00:25:18.732 "name": "pt1", 00:25:18.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.732 "is_configured": true, 00:25:18.732 "data_offset": 256, 00:25:18.732 "data_size": 7936 00:25:18.732 }, 00:25:18.732 { 00:25:18.732 "name": "pt2", 00:25:18.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.732 "is_configured": true, 00:25:18.732 "data_offset": 256, 00:25:18.732 "data_size": 7936 00:25:18.732 } 00:25:18.732 ] 00:25:18.732 } 00:25:18.732 } 00:25:18.732 }' 00:25:18.732 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:18.991 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:18.991 pt2' 00:25:18.991 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:18.991 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:18.991 12:07:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:18.991 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:18.991 "name": "pt1", 00:25:18.991 "aliases": [ 00:25:18.991 "00000000-0000-0000-0000-000000000001" 00:25:18.991 ], 00:25:18.991 "product_name": "passthru", 00:25:18.991 "block_size": 4096, 00:25:18.991 "num_blocks": 8192, 00:25:18.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.991 "assigned_rate_limits": { 00:25:18.991 "rw_ios_per_sec": 0, 00:25:18.991 "rw_mbytes_per_sec": 0, 00:25:18.991 "r_mbytes_per_sec": 0, 00:25:18.991 "w_mbytes_per_sec": 0 00:25:18.991 }, 00:25:18.991 "claimed": true, 00:25:18.991 "claim_type": "exclusive_write", 00:25:18.991 "zoned": false, 00:25:18.991 "supported_io_types": { 00:25:18.991 "read": true, 00:25:18.991 "write": true, 00:25:18.991 "unmap": true, 00:25:18.991 "flush": true, 00:25:18.991 "reset": true, 00:25:18.991 "nvme_admin": false, 00:25:18.991 "nvme_io": false, 00:25:18.991 "nvme_io_md": false, 00:25:18.991 "write_zeroes": true, 00:25:18.991 "zcopy": true, 00:25:18.991 "get_zone_info": false, 00:25:18.991 "zone_management": false, 00:25:18.991 "zone_append": false, 00:25:18.991 "compare": false, 00:25:18.991 "compare_and_write": false, 00:25:18.991 "abort": true, 00:25:18.991 "seek_hole": false, 00:25:18.991 "seek_data": false, 00:25:18.991 "copy": true, 00:25:18.991 "nvme_iov_md": false 00:25:18.991 }, 00:25:18.991 "memory_domains": [ 00:25:18.991 { 00:25:18.991 "dma_device_id": "system", 00:25:18.991 "dma_device_type": 1 00:25:18.991 }, 00:25:18.991 { 00:25:18.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.991 "dma_device_type": 2 00:25:18.991 } 00:25:18.991 ], 00:25:18.991 "driver_specific": { 00:25:18.991 "passthru": { 00:25:18.991 "name": "pt1", 00:25:18.991 "base_bdev_name": "malloc1" 00:25:18.991 } 00:25:18.991 } 00:25:18.991 }' 00:25:18.991 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:19.249 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:19.508 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:19.508 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:19.508 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:19.508 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:19.508 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:19.766 "name": "pt2", 00:25:19.766 "aliases": [ 00:25:19.766 "00000000-0000-0000-0000-000000000002" 00:25:19.766 ], 00:25:19.766 "product_name": "passthru", 00:25:19.766 "block_size": 4096, 00:25:19.766 "num_blocks": 8192, 00:25:19.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.766 "assigned_rate_limits": { 00:25:19.766 "rw_ios_per_sec": 0, 00:25:19.766 "rw_mbytes_per_sec": 0, 00:25:19.766 "r_mbytes_per_sec": 0, 00:25:19.766 "w_mbytes_per_sec": 0 00:25:19.766 }, 00:25:19.766 "claimed": true, 00:25:19.766 "claim_type": "exclusive_write", 00:25:19.766 "zoned": false, 00:25:19.766 "supported_io_types": { 00:25:19.766 "read": true, 00:25:19.766 "write": true, 00:25:19.766 "unmap": true, 00:25:19.766 "flush": true, 00:25:19.766 "reset": true, 00:25:19.766 "nvme_admin": false, 00:25:19.766 "nvme_io": false, 00:25:19.766 "nvme_io_md": false, 00:25:19.766 "write_zeroes": true, 00:25:19.766 "zcopy": true, 00:25:19.766 "get_zone_info": false, 00:25:19.766 "zone_management": false, 00:25:19.766 "zone_append": false, 00:25:19.766 "compare": false, 00:25:19.766 "compare_and_write": false, 00:25:19.766 "abort": true, 00:25:19.766 "seek_hole": false, 00:25:19.766 "seek_data": false, 00:25:19.766 "copy": true, 00:25:19.766 "nvme_iov_md": false 00:25:19.766 }, 00:25:19.766 "memory_domains": [ 00:25:19.766 { 00:25:19.766 "dma_device_id": "system", 00:25:19.766 "dma_device_type": 1 00:25:19.766 }, 00:25:19.766 { 00:25:19.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.766 "dma_device_type": 2 00:25:19.766 } 00:25:19.766 ], 00:25:19.766 "driver_specific": { 00:25:19.766 "passthru": { 00:25:19.766 "name": "pt2", 00:25:19.766 "base_bdev_name": "malloc2" 00:25:19.766 } 00:25:19.766 } 00:25:19.766 }' 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # 
jq .block_size 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:19.766 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:20.025 12:07:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:20.284 [2024-07-25 12:07:06.204391] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.284 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b8b9c045-1b59-4e78-b31c-a2821f788c3d 00:25:20.284 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z b8b9c045-1b59-4e78-b31c-a2821f788c3d ']' 00:25:20.284 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:20.543 [2024-07-25 12:07:06.432762] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.543 [2024-07-25 12:07:06.432781] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.543 [2024-07-25 12:07:06.432830] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.543 [2024-07-25 12:07:06.432878] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.543 [2024-07-25 12:07:06.432889] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f23310 name raid_bdev1, state offline 00:25:20.543 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.543 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 
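The verify_raid_bdev_state helper exercised throughout this trace (bdev_raid.sh@116-@126) fetches the raid bdev with bdev_raid_get_bdevs, selects it by name with jq, and compares its state, raid level and base bdev counts against the expected values. A condensed sketch of that check, with field names taken from the JSON dumps above (illustrative only, not the helper's exact implementation):

  # Sketch of the state check; expected values correspond to a healthy two-disk raid1.
  sock=/var/tmp/spdk-raid.sock
  info=$(./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(echo "$info" | jq -r .state)                      == online ]]
  [[ $(echo "$info" | jq -r .raid_level)                 == raid1 ]]
  [[ $(echo "$info" | jq -r .num_base_bdevs_discovered)  == 2 ]]
  [[ $(echo "$info" | jq -r .num_base_bdevs_operational) == 2 ]]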
00:25:20.801 12:07:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:21.059 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:21.059 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:21.316 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.575 [2024-07-25 12:07:07.579734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:21.575 [2024-07-25 12:07:07.580965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:21.575 [2024-07-25 12:07:07.581015] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:21.575 [2024-07-25 12:07:07.581051] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:21.575 [2024-07-25 12:07:07.581068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.575 [2024-07-25 12:07:07.581077] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f2c3f0 name raid_bdev1, state configuring 00:25:21.575 
request: 00:25:21.575 { 00:25:21.575 "name": "raid_bdev1", 00:25:21.575 "raid_level": "raid1", 00:25:21.575 "base_bdevs": [ 00:25:21.575 "malloc1", 00:25:21.575 "malloc2" 00:25:21.575 ], 00:25:21.575 "superblock": false, 00:25:21.575 "method": "bdev_raid_create", 00:25:21.575 "req_id": 1 00:25:21.575 } 00:25:21.575 Got JSON-RPC error response 00:25:21.575 response: 00:25:21.575 { 00:25:21.575 "code": -17, 00:25:21.575 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:21.575 } 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.575 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:21.834 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:21.834 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:21.834 12:07:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:22.137 [2024-07-25 12:07:08.037081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:22.137 [2024-07-25 12:07:08.037122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.137 [2024-07-25 12:07:08.037145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f2cd70 00:25:22.137 [2024-07-25 12:07:08.037158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.137 [2024-07-25 12:07:08.038645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.137 [2024-07-25 12:07:08.038671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:22.137 [2024-07-25 12:07:08.038729] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:22.137 [2024-07-25 12:07:08.038758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:22.137 pt1 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.137 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.395 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.395 "name": "raid_bdev1", 00:25:22.395 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:22.395 "strip_size_kb": 0, 00:25:22.395 "state": "configuring", 00:25:22.395 "raid_level": "raid1", 00:25:22.395 "superblock": true, 00:25:22.395 "num_base_bdevs": 2, 00:25:22.395 "num_base_bdevs_discovered": 1, 00:25:22.395 "num_base_bdevs_operational": 2, 00:25:22.395 "base_bdevs_list": [ 00:25:22.395 { 00:25:22.395 "name": "pt1", 00:25:22.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:22.395 "is_configured": true, 00:25:22.395 "data_offset": 256, 00:25:22.395 "data_size": 7936 00:25:22.395 }, 00:25:22.395 { 00:25:22.395 "name": null, 00:25:22.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:22.395 "is_configured": false, 00:25:22.395 "data_offset": 256, 00:25:22.395 "data_size": 7936 00:25:22.395 } 00:25:22.395 ] 00:25:22.395 }' 00:25:22.395 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.395 12:07:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:22.961 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:25:22.961 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:22.961 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:22.961 12:07:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:22.961 [2024-07-25 12:07:09.075839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:22.961 [2024-07-25 12:07:09.075881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.961 [2024-07-25 12:07:09.075896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f23bb0 00:25:22.961 [2024-07-25 12:07:09.075907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.961 [2024-07-25 12:07:09.076225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.961 [2024-07-25 12:07:09.076242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:22.961 [2024-07-25 12:07:09.076297] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:22.961 [2024-07-25 12:07:09.076316] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:22.961 [2024-07-25 12:07:09.076404] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f21de0 00:25:22.961 [2024-07-25 12:07:09.076413] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:22.961 
[2024-07-25 12:07:09.076563] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d82eb0 00:25:22.961 [2024-07-25 12:07:09.076683] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f21de0 00:25:22.961 [2024-07-25 12:07:09.076696] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f21de0 00:25:22.961 [2024-07-25 12:07:09.076784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.219 pt2 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.219 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.219 "name": "raid_bdev1", 00:25:23.219 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:23.219 "strip_size_kb": 0, 00:25:23.219 "state": "online", 00:25:23.220 "raid_level": "raid1", 00:25:23.220 "superblock": true, 00:25:23.220 "num_base_bdevs": 2, 00:25:23.220 "num_base_bdevs_discovered": 2, 00:25:23.220 "num_base_bdevs_operational": 2, 00:25:23.220 "base_bdevs_list": [ 00:25:23.220 { 00:25:23.220 "name": "pt1", 00:25:23.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:23.220 "is_configured": true, 00:25:23.220 "data_offset": 256, 00:25:23.220 "data_size": 7936 00:25:23.220 }, 00:25:23.220 { 00:25:23.220 "name": "pt2", 00:25:23.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:23.220 "is_configured": true, 00:25:23.220 "data_offset": 256, 00:25:23.220 "data_size": 7936 00:25:23.220 } 00:25:23.220 ] 00:25:23.220 }' 00:25:23.220 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.220 12:07:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 
-- # local raid_bdev_name=raid_bdev1 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:23.787 12:07:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:24.045 [2024-07-25 12:07:10.082711] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:24.045 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:24.045 "name": "raid_bdev1", 00:25:24.045 "aliases": [ 00:25:24.045 "b8b9c045-1b59-4e78-b31c-a2821f788c3d" 00:25:24.046 ], 00:25:24.046 "product_name": "Raid Volume", 00:25:24.046 "block_size": 4096, 00:25:24.046 "num_blocks": 7936, 00:25:24.046 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:24.046 "assigned_rate_limits": { 00:25:24.046 "rw_ios_per_sec": 0, 00:25:24.046 "rw_mbytes_per_sec": 0, 00:25:24.046 "r_mbytes_per_sec": 0, 00:25:24.046 "w_mbytes_per_sec": 0 00:25:24.046 }, 00:25:24.046 "claimed": false, 00:25:24.046 "zoned": false, 00:25:24.046 "supported_io_types": { 00:25:24.046 "read": true, 00:25:24.046 "write": true, 00:25:24.046 "unmap": false, 00:25:24.046 "flush": false, 00:25:24.046 "reset": true, 00:25:24.046 "nvme_admin": false, 00:25:24.046 "nvme_io": false, 00:25:24.046 "nvme_io_md": false, 00:25:24.046 "write_zeroes": true, 00:25:24.046 "zcopy": false, 00:25:24.046 "get_zone_info": false, 00:25:24.046 "zone_management": false, 00:25:24.046 "zone_append": false, 00:25:24.046 "compare": false, 00:25:24.046 "compare_and_write": false, 00:25:24.046 "abort": false, 00:25:24.046 "seek_hole": false, 00:25:24.046 "seek_data": false, 00:25:24.046 "copy": false, 00:25:24.046 "nvme_iov_md": false 00:25:24.046 }, 00:25:24.046 "memory_domains": [ 00:25:24.046 { 00:25:24.046 "dma_device_id": "system", 00:25:24.046 "dma_device_type": 1 00:25:24.046 }, 00:25:24.046 { 00:25:24.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.046 "dma_device_type": 2 00:25:24.046 }, 00:25:24.046 { 00:25:24.046 "dma_device_id": "system", 00:25:24.046 "dma_device_type": 1 00:25:24.046 }, 00:25:24.046 { 00:25:24.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.046 "dma_device_type": 2 00:25:24.046 } 00:25:24.046 ], 00:25:24.046 "driver_specific": { 00:25:24.046 "raid": { 00:25:24.046 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:24.046 "strip_size_kb": 0, 00:25:24.046 "state": "online", 00:25:24.046 "raid_level": "raid1", 00:25:24.046 "superblock": true, 00:25:24.046 "num_base_bdevs": 2, 00:25:24.046 "num_base_bdevs_discovered": 2, 00:25:24.046 "num_base_bdevs_operational": 2, 00:25:24.046 "base_bdevs_list": [ 00:25:24.046 { 00:25:24.046 "name": "pt1", 00:25:24.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:24.046 "is_configured": true, 00:25:24.046 "data_offset": 256, 00:25:24.046 "data_size": 7936 00:25:24.046 }, 00:25:24.046 { 00:25:24.046 "name": "pt2", 00:25:24.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:24.046 "is_configured": true, 00:25:24.046 "data_offset": 256, 00:25:24.046 
"data_size": 7936 00:25:24.046 } 00:25:24.046 ] 00:25:24.046 } 00:25:24.046 } 00:25:24.046 }' 00:25:24.046 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:24.046 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:24.046 pt2' 00:25:24.046 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:24.046 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:24.046 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.304 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.304 "name": "pt1", 00:25:24.304 "aliases": [ 00:25:24.304 "00000000-0000-0000-0000-000000000001" 00:25:24.304 ], 00:25:24.304 "product_name": "passthru", 00:25:24.304 "block_size": 4096, 00:25:24.304 "num_blocks": 8192, 00:25:24.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:24.304 "assigned_rate_limits": { 00:25:24.304 "rw_ios_per_sec": 0, 00:25:24.304 "rw_mbytes_per_sec": 0, 00:25:24.304 "r_mbytes_per_sec": 0, 00:25:24.304 "w_mbytes_per_sec": 0 00:25:24.304 }, 00:25:24.304 "claimed": true, 00:25:24.304 "claim_type": "exclusive_write", 00:25:24.304 "zoned": false, 00:25:24.304 "supported_io_types": { 00:25:24.304 "read": true, 00:25:24.304 "write": true, 00:25:24.304 "unmap": true, 00:25:24.304 "flush": true, 00:25:24.304 "reset": true, 00:25:24.304 "nvme_admin": false, 00:25:24.304 "nvme_io": false, 00:25:24.304 "nvme_io_md": false, 00:25:24.304 "write_zeroes": true, 00:25:24.304 "zcopy": true, 00:25:24.304 "get_zone_info": false, 00:25:24.304 "zone_management": false, 00:25:24.304 "zone_append": false, 00:25:24.304 "compare": false, 00:25:24.304 "compare_and_write": false, 00:25:24.304 "abort": true, 00:25:24.304 "seek_hole": false, 00:25:24.304 "seek_data": false, 00:25:24.304 "copy": true, 00:25:24.304 "nvme_iov_md": false 00:25:24.304 }, 00:25:24.304 "memory_domains": [ 00:25:24.304 { 00:25:24.304 "dma_device_id": "system", 00:25:24.304 "dma_device_type": 1 00:25:24.304 }, 00:25:24.304 { 00:25:24.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.304 "dma_device_type": 2 00:25:24.304 } 00:25:24.304 ], 00:25:24.304 "driver_specific": { 00:25:24.304 "passthru": { 00:25:24.304 "name": "pt1", 00:25:24.304 "base_bdev_name": "malloc1" 00:25:24.304 } 00:25:24.304 } 00:25:24.304 }' 00:25:24.304 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.562 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.562 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:24.562 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null 
]] 00:25:24.563 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.821 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.821 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.821 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:24.821 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.821 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:25.079 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:25.079 "name": "pt2", 00:25:25.079 "aliases": [ 00:25:25.079 "00000000-0000-0000-0000-000000000002" 00:25:25.079 ], 00:25:25.079 "product_name": "passthru", 00:25:25.079 "block_size": 4096, 00:25:25.079 "num_blocks": 8192, 00:25:25.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:25.079 "assigned_rate_limits": { 00:25:25.079 "rw_ios_per_sec": 0, 00:25:25.079 "rw_mbytes_per_sec": 0, 00:25:25.079 "r_mbytes_per_sec": 0, 00:25:25.079 "w_mbytes_per_sec": 0 00:25:25.079 }, 00:25:25.079 "claimed": true, 00:25:25.079 "claim_type": "exclusive_write", 00:25:25.079 "zoned": false, 00:25:25.079 "supported_io_types": { 00:25:25.079 "read": true, 00:25:25.079 "write": true, 00:25:25.079 "unmap": true, 00:25:25.079 "flush": true, 00:25:25.079 "reset": true, 00:25:25.079 "nvme_admin": false, 00:25:25.079 "nvme_io": false, 00:25:25.079 "nvme_io_md": false, 00:25:25.079 "write_zeroes": true, 00:25:25.079 "zcopy": true, 00:25:25.079 "get_zone_info": false, 00:25:25.079 "zone_management": false, 00:25:25.079 "zone_append": false, 00:25:25.079 "compare": false, 00:25:25.079 "compare_and_write": false, 00:25:25.079 "abort": true, 00:25:25.079 "seek_hole": false, 00:25:25.079 "seek_data": false, 00:25:25.079 "copy": true, 00:25:25.079 "nvme_iov_md": false 00:25:25.079 }, 00:25:25.079 "memory_domains": [ 00:25:25.079 { 00:25:25.079 "dma_device_id": "system", 00:25:25.079 "dma_device_type": 1 00:25:25.079 }, 00:25:25.079 { 00:25:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.079 "dma_device_type": 2 00:25:25.079 } 00:25:25.079 ], 00:25:25.079 "driver_specific": { 00:25:25.079 "passthru": { 00:25:25.079 "name": "pt2", 00:25:25.079 "base_bdev_name": "malloc2" 00:25:25.079 } 00:25:25.079 } 00:25:25.079 }' 00:25:25.079 12:07:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:25.079 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:25.337 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:25.595 [2024-07-25 12:07:11.498481] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.595 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' b8b9c045-1b59-4e78-b31c-a2821f788c3d '!=' b8b9c045-1b59-4e78-b31c-a2821f788c3d ']' 00:25:25.595 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:25.595 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:25.595 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:25:25.595 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:25.853 [2024-07-25 12:07:11.726880] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:25.853 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.854 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.854 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.854 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.854 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.854 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.112 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.112 "name": "raid_bdev1", 00:25:26.112 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:26.112 "strip_size_kb": 0, 00:25:26.112 "state": "online", 00:25:26.112 "raid_level": "raid1", 00:25:26.112 "superblock": true, 00:25:26.112 "num_base_bdevs": 2, 00:25:26.112 "num_base_bdevs_discovered": 1, 00:25:26.112 "num_base_bdevs_operational": 1, 00:25:26.112 "base_bdevs_list": [ 00:25:26.112 { 00:25:26.112 "name": null, 00:25:26.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.112 "is_configured": false, 00:25:26.112 "data_offset": 
256, 00:25:26.112 "data_size": 7936 00:25:26.112 }, 00:25:26.112 { 00:25:26.112 "name": "pt2", 00:25:26.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.112 "is_configured": true, 00:25:26.112 "data_offset": 256, 00:25:26.112 "data_size": 7936 00:25:26.112 } 00:25:26.112 ] 00:25:26.112 }' 00:25:26.112 12:07:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.112 12:07:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:26.679 12:07:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:26.679 [2024-07-25 12:07:12.753624] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.679 [2024-07-25 12:07:12.753646] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.679 [2024-07-25 12:07:12.753691] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.679 [2024-07-25 12:07:12.753730] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.679 [2024-07-25 12:07:12.753740] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f21de0 name raid_bdev1, state offline 00:25:26.679 12:07:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.679 12:07:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:26.937 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:25:26.937 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:26.937 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:26.937 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:26.937 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:25:27.195 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.453 [2024-07-25 12:07:13.439401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.453 [2024-07-25 12:07:13.439442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.453 [2024-07-25 12:07:13.439457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f20f90 00:25:27.453 [2024-07-25 12:07:13.439469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.453 [2024-07-25 
12:07:13.440944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.453 [2024-07-25 12:07:13.440969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.453 [2024-07-25 12:07:13.441027] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:27.453 [2024-07-25 12:07:13.441050] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.453 [2024-07-25 12:07:13.441127] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d81b40 00:25:27.453 [2024-07-25 12:07:13.441137] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:27.453 [2024-07-25 12:07:13.441298] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f2d810 00:25:27.453 [2024-07-25 12:07:13.441408] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d81b40 00:25:27.453 [2024-07-25 12:07:13.441417] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d81b40 00:25:27.453 [2024-07-25 12:07:13.441506] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.453 pt2 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.453 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.711 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.711 "name": "raid_bdev1", 00:25:27.711 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:27.711 "strip_size_kb": 0, 00:25:27.711 "state": "online", 00:25:27.711 "raid_level": "raid1", 00:25:27.711 "superblock": true, 00:25:27.711 "num_base_bdevs": 2, 00:25:27.711 "num_base_bdevs_discovered": 1, 00:25:27.711 "num_base_bdevs_operational": 1, 00:25:27.711 "base_bdevs_list": [ 00:25:27.711 { 00:25:27.711 "name": null, 00:25:27.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.711 "is_configured": false, 00:25:27.711 "data_offset": 256, 00:25:27.711 "data_size": 7936 00:25:27.711 }, 00:25:27.711 { 00:25:27.711 "name": "pt2", 00:25:27.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.711 "is_configured": true, 
00:25:27.711 "data_offset": 256, 00:25:27.711 "data_size": 7936 00:25:27.711 } 00:25:27.711 ] 00:25:27.711 }' 00:25:27.711 12:07:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.711 12:07:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:28.277 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:28.536 [2024-07-25 12:07:14.470198] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.536 [2024-07-25 12:07:14.470221] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.536 [2024-07-25 12:07:14.470270] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.536 [2024-07-25 12:07:14.470308] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.536 [2024-07-25 12:07:14.470319] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d81b40 name raid_bdev1, state offline 00:25:28.536 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.536 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:28.794 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:28.794 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:28.794 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:25:28.794 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:29.053 [2024-07-25 12:07:14.919371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:29.053 [2024-07-25 12:07:14.919415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.053 [2024-07-25 12:07:14.919430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f238d0 00:25:29.053 [2024-07-25 12:07:14.919442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.053 [2024-07-25 12:07:14.920934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.053 [2024-07-25 12:07:14.920961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:29.053 [2024-07-25 12:07:14.921021] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:29.053 [2024-07-25 12:07:14.921046] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:29.053 [2024-07-25 12:07:14.921150] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:29.053 [2024-07-25 12:07:14.921163] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:29.053 [2024-07-25 12:07:14.921175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d82690 name raid_bdev1, state configuring 00:25:29.053 [2024-07-25 12:07:14.921198] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 
is claimed 00:25:29.053 [2024-07-25 12:07:14.921250] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d811e0 00:25:29.053 [2024-07-25 12:07:14.921260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:29.053 [2024-07-25 12:07:14.921411] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d89990 00:25:29.053 [2024-07-25 12:07:14.921521] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d811e0 00:25:29.053 [2024-07-25 12:07:14.921530] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d811e0 00:25:29.053 [2024-07-25 12:07:14.921617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.053 pt1 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.053 12:07:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.312 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.312 "name": "raid_bdev1", 00:25:29.312 "uuid": "b8b9c045-1b59-4e78-b31c-a2821f788c3d", 00:25:29.312 "strip_size_kb": 0, 00:25:29.312 "state": "online", 00:25:29.312 "raid_level": "raid1", 00:25:29.312 "superblock": true, 00:25:29.312 "num_base_bdevs": 2, 00:25:29.312 "num_base_bdevs_discovered": 1, 00:25:29.312 "num_base_bdevs_operational": 1, 00:25:29.312 "base_bdevs_list": [ 00:25:29.312 { 00:25:29.312 "name": null, 00:25:29.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.312 "is_configured": false, 00:25:29.312 "data_offset": 256, 00:25:29.312 "data_size": 7936 00:25:29.312 }, 00:25:29.312 { 00:25:29.312 "name": "pt2", 00:25:29.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:29.312 "is_configured": true, 00:25:29.312 "data_offset": 256, 00:25:29.312 "data_size": 7936 00:25:29.312 } 00:25:29.312 ] 00:25:29.312 }' 00:25:29.312 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.312 12:07:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:29.878 12:07:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:25:29.878 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:29.878 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:25:29.878 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:29.878 12:07:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:25:30.137 [2024-07-25 12:07:16.166852] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' b8b9c045-1b59-4e78-b31c-a2821f788c3d '!=' b8b9c045-1b59-4e78-b31c-a2821f788c3d ']' 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65904 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 65904 ']' 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 65904 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65904 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65904' 00:25:30.137 killing process with pid 65904 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 65904 00:25:30.137 [2024-07-25 12:07:16.244429] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.137 [2024-07-25 12:07:16.244476] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.137 [2024-07-25 12:07:16.244515] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:30.137 [2024-07-25 12:07:16.244525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d811e0 name raid_bdev1, state offline 00:25:30.137 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 65904 00:25:30.396 [2024-07-25 12:07:16.259970] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.396 12:07:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:25:30.396 00:25:30.396 real 0m14.792s 00:25:30.396 user 0m26.709s 00:25:30.396 sys 0m2.842s 00:25:30.396 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.396 12:07:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:30.396 ************************************ 00:25:30.396 END TEST raid_superblock_test_4k 00:25:30.396 ************************************ 00:25:30.396 12:07:16 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' 
true = true ']' 00:25:30.396 12:07:16 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:25:30.396 12:07:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:30.396 12:07:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.396 12:07:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.655 ************************************ 00:25:30.655 START TEST raid_rebuild_test_sb_4k 00:25:30.655 ************************************ 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=68701 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 68701 /var/tmp/spdk-raid.sock 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 68701 ']' 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:30.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.655 12:07:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:30.655 [2024-07-25 12:07:16.606124] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:25:30.655 [2024-07-25 12:07:16.606201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68701 ] 00:25:30.655 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:30.655 Zero copy mechanism will not be used. 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.5 cannot be used 
00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:30.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.655 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:30.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.656 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:30.656 [2024-07-25 12:07:16.737847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.913 [2024-07-25 12:07:16.825504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.913 [2024-07-25 12:07:16.889861] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.913 [2024-07-25 12:07:16.889896] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.480 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.480 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:25:31.480 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:31.480 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:25:31.738 
BaseBdev1_malloc 00:25:31.738 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:31.997 [2024-07-25 12:07:17.943880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:31.997 [2024-07-25 12:07:17.943924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.997 [2024-07-25 12:07:17.943945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14075f0 00:25:31.997 [2024-07-25 12:07:17.943956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.997 [2024-07-25 12:07:17.945468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.997 [2024-07-25 12:07:17.945494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.997 BaseBdev1 00:25:31.997 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:31.997 12:07:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:25:32.255 BaseBdev2_malloc 00:25:32.256 12:07:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:32.514 [2024-07-25 12:07:18.401448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:32.514 [2024-07-25 12:07:18.401488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.514 [2024-07-25 12:07:18.401506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ab130 00:25:32.514 [2024-07-25 12:07:18.401518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.514 [2024-07-25 12:07:18.402929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.514 [2024-07-25 12:07:18.402956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:32.514 BaseBdev2 00:25:32.514 12:07:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:25:32.772 spare_malloc 00:25:32.772 12:07:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:32.772 spare_delay 00:25:32.772 12:07:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:33.031 [2024-07-25 12:07:19.087433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:33.031 [2024-07-25 12:07:19.087472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.031 [2024-07-25 12:07:19.087490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aa770 00:25:33.031 [2024-07-25 12:07:19.087501] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.031 [2024-07-25 12:07:19.088888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.031 [2024-07-25 12:07:19.088914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:33.031 spare 00:25:33.031 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:33.288 [2024-07-25 12:07:19.304038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.288 [2024-07-25 12:07:19.305166] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.288 [2024-07-25 12:07:19.305311] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x13ff270 00:25:33.288 [2024-07-25 12:07:19.305324] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:33.288 [2024-07-25 12:07:19.305489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ab3c0 00:25:33.288 [2024-07-25 12:07:19.305617] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13ff270 00:25:33.288 [2024-07-25 12:07:19.305626] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13ff270 00:25:33.288 [2024-07-25 12:07:19.305712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.288 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.547 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.547 "name": "raid_bdev1", 00:25:33.547 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:33.547 "strip_size_kb": 0, 00:25:33.547 "state": "online", 00:25:33.547 "raid_level": "raid1", 00:25:33.547 "superblock": true, 00:25:33.547 "num_base_bdevs": 2, 00:25:33.547 "num_base_bdevs_discovered": 2, 00:25:33.547 "num_base_bdevs_operational": 2, 00:25:33.547 "base_bdevs_list": [ 00:25:33.547 { 00:25:33.547 
"name": "BaseBdev1", 00:25:33.547 "uuid": "0f511755-6561-54b6-aae8-8f57677a4a2c", 00:25:33.547 "is_configured": true, 00:25:33.547 "data_offset": 256, 00:25:33.547 "data_size": 7936 00:25:33.547 }, 00:25:33.547 { 00:25:33.547 "name": "BaseBdev2", 00:25:33.547 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:33.547 "is_configured": true, 00:25:33.547 "data_offset": 256, 00:25:33.547 "data_size": 7936 00:25:33.547 } 00:25:33.547 ] 00:25:33.547 }' 00:25:33.547 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.547 12:07:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:34.113 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:34.113 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:34.371 [2024-07-25 12:07:20.346981] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.372 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:25:34.372 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.372 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:34.630 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:34.898 [2024-07-25 12:07:20.808016] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ab3c0 00:25:34.898 /dev/nbd0 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:34.898 1+0 records in 00:25:34.898 1+0 records out 00:25:34.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221542 s, 18.5 MB/s 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:34.898 12:07:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:35.466 7936+0 records in 00:25:35.466 7936+0 records out 00:25:35.466 32505856 bytes (33 MB, 31 MiB) copied, 0.681783 s, 47.7 MB/s 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:35.466 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:35.759 [2024-07-25 12:07:21.792758] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:35.759 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:36.017 [2024-07-25 12:07:21.953223] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.017 12:07:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.276 12:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.276 "name": "raid_bdev1", 00:25:36.276 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:36.276 "strip_size_kb": 0, 00:25:36.276 "state": "online", 00:25:36.276 "raid_level": "raid1", 00:25:36.276 "superblock": true, 00:25:36.276 "num_base_bdevs": 2, 00:25:36.276 "num_base_bdevs_discovered": 1, 00:25:36.276 "num_base_bdevs_operational": 1, 00:25:36.276 "base_bdevs_list": [ 00:25:36.276 { 00:25:36.276 "name": null, 00:25:36.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.276 "is_configured": false, 00:25:36.276 "data_offset": 256, 00:25:36.276 "data_size": 7936 00:25:36.276 }, 00:25:36.276 { 00:25:36.276 "name": "BaseBdev2", 00:25:36.276 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:36.276 "is_configured": true, 00:25:36.276 "data_offset": 256, 00:25:36.276 "data_size": 7936 00:25:36.276 } 00:25:36.276 ] 00:25:36.276 }' 00:25:36.276 12:07:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.276 12:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:36.844 12:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:36.844 [2024-07-25 12:07:22.939828] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:36.844 [2024-07-25 12:07:22.944585] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ab3c0 00:25:36.844 [2024-07-25 12:07:22.946631] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:36.844 12:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.220 12:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.220 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.220 "name": "raid_bdev1", 00:25:38.220 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:38.220 "strip_size_kb": 0, 00:25:38.220 "state": "online", 00:25:38.220 "raid_level": "raid1", 00:25:38.220 "superblock": true, 00:25:38.220 "num_base_bdevs": 2, 00:25:38.220 "num_base_bdevs_discovered": 2, 00:25:38.220 "num_base_bdevs_operational": 2, 00:25:38.220 "process": { 00:25:38.220 "type": "rebuild", 00:25:38.220 "target": "spare", 00:25:38.220 "progress": { 00:25:38.220 "blocks": 3072, 00:25:38.220 "percent": 38 00:25:38.220 } 00:25:38.220 }, 00:25:38.220 "base_bdevs_list": [ 00:25:38.220 { 00:25:38.220 "name": "spare", 00:25:38.220 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:38.220 "is_configured": true, 00:25:38.220 "data_offset": 256, 00:25:38.220 "data_size": 7936 00:25:38.221 }, 00:25:38.221 { 00:25:38.221 "name": "BaseBdev2", 00:25:38.221 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:38.221 "is_configured": true, 00:25:38.221 "data_offset": 256, 00:25:38.221 "data_size": 7936 00:25:38.221 } 00:25:38.221 ] 00:25:38.221 }' 00:25:38.221 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.221 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.221 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.221 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.221 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:38.479 [2024-07-25 12:07:24.472835] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:38.479 [2024-07-25 12:07:24.558326] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:38.479 [2024-07-25 12:07:24.558370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.479 [2024-07-25 12:07:24.558384] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:38.479 [2024-07-25 12:07:24.558392] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.479 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.738 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.738 "name": "raid_bdev1", 00:25:38.738 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:38.738 "strip_size_kb": 0, 00:25:38.738 "state": "online", 00:25:38.738 "raid_level": "raid1", 00:25:38.738 "superblock": true, 00:25:38.738 "num_base_bdevs": 2, 00:25:38.738 "num_base_bdevs_discovered": 1, 00:25:38.738 "num_base_bdevs_operational": 1, 00:25:38.738 "base_bdevs_list": [ 00:25:38.738 { 00:25:38.738 "name": null, 00:25:38.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.738 "is_configured": false, 00:25:38.738 "data_offset": 256, 00:25:38.738 "data_size": 7936 00:25:38.738 }, 00:25:38.738 { 00:25:38.738 "name": "BaseBdev2", 00:25:38.738 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:38.738 "is_configured": true, 00:25:38.738 "data_offset": 256, 00:25:38.738 "data_size": 7936 00:25:38.738 } 00:25:38.738 ] 00:25:38.738 }' 00:25:38.738 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.738 12:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.305 12:07:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.305 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.563 "name": "raid_bdev1", 00:25:39.563 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:39.563 "strip_size_kb": 0, 00:25:39.563 "state": "online", 00:25:39.563 "raid_level": "raid1", 00:25:39.563 "superblock": true, 00:25:39.563 "num_base_bdevs": 2, 00:25:39.563 "num_base_bdevs_discovered": 1, 00:25:39.563 "num_base_bdevs_operational": 1, 00:25:39.563 "base_bdevs_list": [ 00:25:39.563 { 00:25:39.563 "name": null, 00:25:39.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.563 "is_configured": false, 00:25:39.563 "data_offset": 256, 00:25:39.563 "data_size": 7936 00:25:39.563 }, 00:25:39.563 { 00:25:39.563 "name": "BaseBdev2", 00:25:39.563 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:39.563 "is_configured": true, 00:25:39.563 "data_offset": 256, 00:25:39.563 "data_size": 7936 00:25:39.563 } 00:25:39.563 ] 00:25:39.563 }' 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:39.563 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:39.822 [2024-07-25 12:07:25.849965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:39.822 [2024-07-25 12:07:25.854657] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x159f8f0 00:25:39.822 [2024-07-25 12:07:25.856015] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:39.822 12:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:40.755 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.755 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:40.755 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:40.755 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:40.755 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.014 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.014 12:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.014 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.014 "name": "raid_bdev1", 00:25:41.014 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:41.014 "strip_size_kb": 0, 00:25:41.014 "state": "online", 00:25:41.014 "raid_level": "raid1", 00:25:41.014 "superblock": true, 00:25:41.014 "num_base_bdevs": 2, 00:25:41.014 "num_base_bdevs_discovered": 2, 00:25:41.014 "num_base_bdevs_operational": 2, 00:25:41.014 "process": { 00:25:41.014 "type": "rebuild", 00:25:41.014 "target": "spare", 00:25:41.014 "progress": { 00:25:41.014 "blocks": 3072, 00:25:41.014 "percent": 38 00:25:41.014 } 00:25:41.014 }, 00:25:41.014 "base_bdevs_list": [ 00:25:41.014 { 00:25:41.014 "name": "spare", 00:25:41.014 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:41.014 "is_configured": true, 00:25:41.014 "data_offset": 256, 00:25:41.014 "data_size": 7936 00:25:41.014 }, 00:25:41.014 { 00:25:41.014 "name": "BaseBdev2", 00:25:41.014 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:41.014 "is_configured": true, 00:25:41.014 "data_offset": 256, 00:25:41.014 "data_size": 7936 00:25:41.014 } 00:25:41.014 ] 00:25:41.014 }' 00:25:41.014 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:41.273 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=952 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
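The rpc.py call traced just above, together with the jq filter that follows it, is the heart of the verify_raid_bdev_process helper: dump every raid bdev over the test's private RPC socket, narrow the dump to raid_bdev1, then compare .process.type and .process.target against the expected rebuild state with a "none" fallback. A minimal stand-alone sketch of that check, reusing only the rpc.py and jq invocations visible in this trace (the function name check_rebuild and its argument handling are illustrative additions, not part of the test suite), might look like:

#!/usr/bin/env bash
# Sketch only -- recombines the commands traced at bdev_raid.sh@187-@190.
# rpc.py path and RPC socket are taken from the log; check_rebuild is a made-up name.
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

check_rebuild() {
        local name=$1 want_type=$2 want_target=$3 info
        # Dump all raid bdevs and keep only the entry whose .name matches.
        info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # .process disappears once no rebuild is running, hence the // "none" fallback.
        [[ $(jq -r '.process.type // "none"' <<< "$info") == "$want_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$info") == "$want_target" ]]
}

check_rebuild raid_bdev1 rebuild spare   # holds while the spare is being rebuilt
check_rebuild raid_bdev1 none none       # holds once the rebuild has finished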
00:25:41.273 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.531 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.531 "name": "raid_bdev1", 00:25:41.531 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:41.531 "strip_size_kb": 0, 00:25:41.531 "state": "online", 00:25:41.531 "raid_level": "raid1", 00:25:41.531 "superblock": true, 00:25:41.531 "num_base_bdevs": 2, 00:25:41.531 "num_base_bdevs_discovered": 2, 00:25:41.531 "num_base_bdevs_operational": 2, 00:25:41.531 "process": { 00:25:41.531 "type": "rebuild", 00:25:41.531 "target": "spare", 00:25:41.531 "progress": { 00:25:41.531 "blocks": 3840, 00:25:41.531 "percent": 48 00:25:41.531 } 00:25:41.531 }, 00:25:41.531 "base_bdevs_list": [ 00:25:41.531 { 00:25:41.531 "name": "spare", 00:25:41.531 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:41.531 "is_configured": true, 00:25:41.531 "data_offset": 256, 00:25:41.531 "data_size": 7936 00:25:41.531 }, 00:25:41.532 { 00:25:41.532 "name": "BaseBdev2", 00:25:41.532 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:41.532 "is_configured": true, 00:25:41.532 "data_offset": 256, 00:25:41.532 "data_size": 7936 00:25:41.532 } 00:25:41.532 ] 00:25:41.532 }' 00:25:41.532 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.532 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.532 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.532 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.532 12:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.466 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.725 "name": "raid_bdev1", 00:25:42.725 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:42.725 "strip_size_kb": 0, 00:25:42.725 "state": "online", 00:25:42.725 "raid_level": "raid1", 00:25:42.725 "superblock": true, 00:25:42.725 "num_base_bdevs": 2, 00:25:42.725 "num_base_bdevs_discovered": 2, 00:25:42.725 "num_base_bdevs_operational": 2, 00:25:42.725 "process": { 00:25:42.725 "type": "rebuild", 00:25:42.725 "target": "spare", 00:25:42.725 "progress": { 00:25:42.725 "blocks": 7168, 00:25:42.725 "percent": 90 00:25:42.725 } 00:25:42.725 }, 
00:25:42.725 "base_bdevs_list": [ 00:25:42.725 { 00:25:42.725 "name": "spare", 00:25:42.725 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:42.725 "is_configured": true, 00:25:42.725 "data_offset": 256, 00:25:42.725 "data_size": 7936 00:25:42.725 }, 00:25:42.725 { 00:25:42.725 "name": "BaseBdev2", 00:25:42.725 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:42.725 "is_configured": true, 00:25:42.725 "data_offset": 256, 00:25:42.725 "data_size": 7936 00:25:42.725 } 00:25:42.725 ] 00:25:42.725 }' 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.725 12:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:42.983 [2024-07-25 12:07:28.978598] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:42.983 [2024-07-25 12:07:28.978650] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:42.983 [2024-07-25 12:07:28.978726] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.919 12:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.177 "name": "raid_bdev1", 00:25:44.177 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:44.177 "strip_size_kb": 0, 00:25:44.177 "state": "online", 00:25:44.177 "raid_level": "raid1", 00:25:44.177 "superblock": true, 00:25:44.177 "num_base_bdevs": 2, 00:25:44.177 "num_base_bdevs_discovered": 2, 00:25:44.177 "num_base_bdevs_operational": 2, 00:25:44.177 "base_bdevs_list": [ 00:25:44.177 { 00:25:44.177 "name": "spare", 00:25:44.177 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:44.177 "is_configured": true, 00:25:44.177 "data_offset": 256, 00:25:44.177 "data_size": 7936 00:25:44.177 }, 00:25:44.177 { 00:25:44.177 "name": "BaseBdev2", 00:25:44.177 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:44.177 "is_configured": true, 00:25:44.177 "data_offset": 256, 00:25:44.177 "data_size": 7936 00:25:44.177 } 00:25:44.177 ] 00:25:44.177 }' 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type 
// "none"' 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.177 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.436 "name": "raid_bdev1", 00:25:44.436 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:44.436 "strip_size_kb": 0, 00:25:44.436 "state": "online", 00:25:44.436 "raid_level": "raid1", 00:25:44.436 "superblock": true, 00:25:44.436 "num_base_bdevs": 2, 00:25:44.436 "num_base_bdevs_discovered": 2, 00:25:44.436 "num_base_bdevs_operational": 2, 00:25:44.436 "base_bdevs_list": [ 00:25:44.436 { 00:25:44.436 "name": "spare", 00:25:44.436 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:44.436 "is_configured": true, 00:25:44.436 "data_offset": 256, 00:25:44.436 "data_size": 7936 00:25:44.436 }, 00:25:44.436 { 00:25:44.436 "name": "BaseBdev2", 00:25:44.436 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:44.436 "is_configured": true, 00:25:44.436 "data_offset": 256, 00:25:44.436 "data_size": 7936 00:25:44.436 } 00:25:44.436 ] 00:25:44.436 }' 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.436 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.694 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.694 "name": "raid_bdev1", 00:25:44.694 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:44.694 "strip_size_kb": 0, 00:25:44.694 "state": "online", 00:25:44.694 "raid_level": "raid1", 00:25:44.694 "superblock": true, 00:25:44.694 "num_base_bdevs": 2, 00:25:44.694 "num_base_bdevs_discovered": 2, 00:25:44.694 "num_base_bdevs_operational": 2, 00:25:44.694 "base_bdevs_list": [ 00:25:44.694 { 00:25:44.694 "name": "spare", 00:25:44.694 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:44.694 "is_configured": true, 00:25:44.694 "data_offset": 256, 00:25:44.694 "data_size": 7936 00:25:44.694 }, 00:25:44.694 { 00:25:44.694 "name": "BaseBdev2", 00:25:44.694 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:44.694 "is_configured": true, 00:25:44.694 "data_offset": 256, 00:25:44.694 "data_size": 7936 00:25:44.694 } 00:25:44.694 ] 00:25:44.694 }' 00:25:44.694 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.694 12:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:45.261 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:45.519 [2024-07-25 12:07:31.501437] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.519 [2024-07-25 12:07:31.501463] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:45.519 [2024-07-25 12:07:31.501512] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:45.519 [2024-07-25 12:07:31.501562] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:45.519 [2024-07-25 12:07:31.501573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13ff270 name raid_bdev1, state offline 00:25:45.519 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.519 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:45.778 12:07:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:45.778 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:46.037 /dev/nbd0 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:46.037 12:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.038 1+0 records in 00:25:46.038 1+0 records out 00:25:46.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016301 s, 25.1 MB/s 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:46.038 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 
00:25:46.311 /dev/nbd1 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.311 1+0 records in 00:25:46.311 1+0 records out 00:25:46.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290755 s, 14.1 MB/s 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.311 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:46.570 12:07:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.570 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:25:46.829 12:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:47.088 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:47.347 [2024-07-25 12:07:33.226524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:47.347 [2024-07-25 12:07:33.226561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.347 [2024-07-25 12:07:33.226579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ff4f0 00:25:47.347 [2024-07-25 12:07:33.226590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.347 [2024-07-25 12:07:33.228105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.347 [2024-07-25 12:07:33.228132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:47.347 [2024-07-25 12:07:33.228211] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:47.347 [2024-07-25 12:07:33.228236] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.347 [2024-07-25 12:07:33.228330] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.347 spare 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:47.347 12:07:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.347 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.347 [2024-07-25 12:07:33.328639] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14008b0 00:25:47.347 [2024-07-25 12:07:33.328653] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:47.347 [2024-07-25 12:07:33.328816] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x159f8f0 00:25:47.347 [2024-07-25 12:07:33.328946] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14008b0 00:25:47.347 [2024-07-25 12:07:33.328955] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14008b0 00:25:47.347 [2024-07-25 12:07:33.329051] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.606 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.606 "name": "raid_bdev1", 00:25:47.606 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:47.606 "strip_size_kb": 0, 00:25:47.606 "state": "online", 00:25:47.606 "raid_level": "raid1", 00:25:47.606 "superblock": true, 00:25:47.606 "num_base_bdevs": 2, 00:25:47.606 "num_base_bdevs_discovered": 2, 00:25:47.606 "num_base_bdevs_operational": 2, 00:25:47.606 "base_bdevs_list": [ 00:25:47.606 { 00:25:47.606 "name": "spare", 00:25:47.606 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:47.606 "is_configured": true, 00:25:47.606 "data_offset": 256, 00:25:47.606 "data_size": 7936 00:25:47.606 }, 00:25:47.606 { 00:25:47.606 "name": "BaseBdev2", 00:25:47.606 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:47.606 "is_configured": true, 00:25:47.606 "data_offset": 256, 00:25:47.606 "data_size": 7936 00:25:47.606 } 00:25:47.606 ] 00:25:47.606 }' 00:25:47.606 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.606 12:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.173 "name": "raid_bdev1", 00:25:48.173 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:48.173 "strip_size_kb": 0, 00:25:48.173 "state": "online", 00:25:48.173 "raid_level": "raid1", 00:25:48.173 "superblock": true, 00:25:48.173 "num_base_bdevs": 2, 00:25:48.173 "num_base_bdevs_discovered": 2, 00:25:48.173 "num_base_bdevs_operational": 2, 00:25:48.173 "base_bdevs_list": [ 00:25:48.173 { 00:25:48.173 "name": "spare", 00:25:48.173 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:48.173 "is_configured": true, 00:25:48.173 "data_offset": 256, 00:25:48.173 "data_size": 7936 00:25:48.173 }, 00:25:48.173 { 00:25:48.173 "name": "BaseBdev2", 00:25:48.173 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:48.173 "is_configured": true, 00:25:48.173 "data_offset": 256, 00:25:48.173 "data_size": 7936 00:25:48.173 } 00:25:48.173 ] 00:25:48.173 }' 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:48.173 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:48.431 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:48.431 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.431 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:48.431 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.431 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:48.690 [2024-07-25 12:07:34.742620] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.690 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.948 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.948 "name": "raid_bdev1", 00:25:48.948 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:48.948 "strip_size_kb": 0, 00:25:48.948 "state": "online", 00:25:48.948 "raid_level": "raid1", 00:25:48.948 "superblock": true, 00:25:48.948 "num_base_bdevs": 2, 00:25:48.948 "num_base_bdevs_discovered": 1, 00:25:48.948 "num_base_bdevs_operational": 1, 00:25:48.948 "base_bdevs_list": [ 00:25:48.948 { 00:25:48.948 "name": null, 00:25:48.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.948 "is_configured": false, 00:25:48.948 "data_offset": 256, 00:25:48.948 "data_size": 7936 00:25:48.948 }, 00:25:48.948 { 00:25:48.948 "name": "BaseBdev2", 00:25:48.948 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:48.948 "is_configured": true, 00:25:48.948 "data_offset": 256, 00:25:48.948 "data_size": 7936 00:25:48.948 } 00:25:48.948 ] 00:25:48.948 }' 00:25:48.948 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.948 12:07:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:49.514 12:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.773 [2024-07-25 12:07:35.705165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.773 [2024-07-25 12:07:35.705290] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:49.773 [2024-07-25 12:07:35.705306] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
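The steps traced around bdev_raid.sh@752-@754 exercise the remove/re-add cycle for a base bdev that still carries an older superblock: the spare is detached, added back, recognised by its stale sequence number (4 smaller than the array's 5), and a rebuild is started onto it. Condensed into the bare RPC sequence seen in this log, the cycle looks roughly like the sketch below; the until-loop at the end is an illustrative addition standing in for the test's own sleep-and-verify steps.

# Sketch only -- RPC method names and the socket path come from the trace above.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$RPC bdev_raid_remove_base_bdev spare            # raid_bdev1 drops to 1 of 2 base bdevs
$RPC bdev_raid_add_base_bdev raid_bdev1 spare    # stale superblock, so the spare is re-added and rebuilt

# Illustrative: wait until .process vanishes again, i.e. the rebuild onto the spare is done.
until [[ $($RPC bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"') == none ]]; do
        sleep 1
done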
00:25:49.773 [2024-07-25 12:07:35.705331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.773 [2024-07-25 12:07:35.709895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x159f8f0 00:25:49.773 [2024-07-25 12:07:35.712013] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:49.773 12:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.758 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.017 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.017 "name": "raid_bdev1", 00:25:51.017 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:51.017 "strip_size_kb": 0, 00:25:51.017 "state": "online", 00:25:51.017 "raid_level": "raid1", 00:25:51.017 "superblock": true, 00:25:51.017 "num_base_bdevs": 2, 00:25:51.017 "num_base_bdevs_discovered": 2, 00:25:51.017 "num_base_bdevs_operational": 2, 00:25:51.017 "process": { 00:25:51.017 "type": "rebuild", 00:25:51.017 "target": "spare", 00:25:51.017 "progress": { 00:25:51.017 "blocks": 3072, 00:25:51.017 "percent": 38 00:25:51.017 } 00:25:51.017 }, 00:25:51.017 "base_bdevs_list": [ 00:25:51.017 { 00:25:51.017 "name": "spare", 00:25:51.017 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:51.017 "is_configured": true, 00:25:51.017 "data_offset": 256, 00:25:51.017 "data_size": 7936 00:25:51.017 }, 00:25:51.017 { 00:25:51.017 "name": "BaseBdev2", 00:25:51.017 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:51.017 "is_configured": true, 00:25:51.017 "data_offset": 256, 00:25:51.017 "data_size": 7936 00:25:51.017 } 00:25:51.017 ] 00:25:51.017 }' 00:25:51.017 12:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.017 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.017 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.017 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.017 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:51.276 [2024-07-25 12:07:37.262181] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.276 [2024-07-25 12:07:37.323574] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.276 [2024-07-25 12:07:37.323617] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.276 [2024-07-25 12:07:37.323631] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.276 [2024-07-25 12:07:37.323639] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.276 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.534 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.534 "name": "raid_bdev1", 00:25:51.534 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:51.534 "strip_size_kb": 0, 00:25:51.534 "state": "online", 00:25:51.534 "raid_level": "raid1", 00:25:51.534 "superblock": true, 00:25:51.534 "num_base_bdevs": 2, 00:25:51.534 "num_base_bdevs_discovered": 1, 00:25:51.534 "num_base_bdevs_operational": 1, 00:25:51.534 "base_bdevs_list": [ 00:25:51.534 { 00:25:51.534 "name": null, 00:25:51.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.534 "is_configured": false, 00:25:51.534 "data_offset": 256, 00:25:51.534 "data_size": 7936 00:25:51.534 }, 00:25:51.534 { 00:25:51.534 "name": "BaseBdev2", 00:25:51.534 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:51.534 "is_configured": true, 00:25:51.534 "data_offset": 256, 00:25:51.534 "data_size": 7936 00:25:51.534 } 00:25:51.534 ] 00:25:51.534 }' 00:25:51.534 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.534 12:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:52.102 12:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:52.360 [2024-07-25 12:07:38.358509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:52.360 [2024-07-25 12:07:38.358551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.360 [2024-07-25 12:07:38.358570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1400cf0 00:25:52.360 
[2024-07-25 12:07:38.358587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.360 [2024-07-25 12:07:38.358923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.360 [2024-07-25 12:07:38.358939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:52.360 [2024-07-25 12:07:38.359010] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:52.360 [2024-07-25 12:07:38.359021] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:52.360 [2024-07-25 12:07:38.359032] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:52.360 [2024-07-25 12:07:38.359049] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.360 [2024-07-25 12:07:38.363643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13ffda0 00:25:52.360 spare 00:25:52.360 [2024-07-25 12:07:38.364999] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:52.360 12:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.295 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.554 "name": "raid_bdev1", 00:25:53.554 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:53.554 "strip_size_kb": 0, 00:25:53.554 "state": "online", 00:25:53.554 "raid_level": "raid1", 00:25:53.554 "superblock": true, 00:25:53.554 "num_base_bdevs": 2, 00:25:53.554 "num_base_bdevs_discovered": 2, 00:25:53.554 "num_base_bdevs_operational": 2, 00:25:53.554 "process": { 00:25:53.554 "type": "rebuild", 00:25:53.554 "target": "spare", 00:25:53.554 "progress": { 00:25:53.554 "blocks": 2816, 00:25:53.554 "percent": 35 00:25:53.554 } 00:25:53.554 }, 00:25:53.554 "base_bdevs_list": [ 00:25:53.554 { 00:25:53.554 "name": "spare", 00:25:53.554 "uuid": "53640f12-ffd8-56c0-9dfa-13b80195251e", 00:25:53.554 "is_configured": true, 00:25:53.554 "data_offset": 256, 00:25:53.554 "data_size": 7936 00:25:53.554 }, 00:25:53.554 { 00:25:53.554 "name": "BaseBdev2", 00:25:53.554 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:53.554 "is_configured": true, 00:25:53.554 "data_offset": 256, 00:25:53.554 "data_size": 7936 00:25:53.554 } 00:25:53.554 ] 00:25:53.554 }' 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.554 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:53.813 [2024-07-25 12:07:39.860177] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:53.813 [2024-07-25 12:07:39.876179] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:53.813 [2024-07-25 12:07:39.876220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.813 [2024-07-25 12:07:39.876234] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:53.813 [2024-07-25 12:07:39.876242] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.813 12:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.071 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.071 "name": "raid_bdev1", 00:25:54.071 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:54.071 "strip_size_kb": 0, 00:25:54.071 "state": "online", 00:25:54.071 "raid_level": "raid1", 00:25:54.071 "superblock": true, 00:25:54.071 "num_base_bdevs": 2, 00:25:54.071 "num_base_bdevs_discovered": 1, 00:25:54.071 "num_base_bdevs_operational": 1, 00:25:54.071 "base_bdevs_list": [ 00:25:54.071 { 00:25:54.071 "name": null, 00:25:54.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.071 "is_configured": false, 00:25:54.071 "data_offset": 256, 00:25:54.071 "data_size": 7936 00:25:54.071 }, 00:25:54.071 { 00:25:54.072 "name": "BaseBdev2", 00:25:54.072 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:54.072 "is_configured": true, 00:25:54.072 "data_offset": 256, 00:25:54.072 "data_size": 7936 00:25:54.072 } 00:25:54.072 ] 00:25:54.072 }' 
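The raid_bdev_info dump that closes this chunk carries every field verify_raid_bdev_state (bdev_raid.sh@116-@128) compares after the spare is torn away mid-rebuild: state, raid_level, num_base_bdevs_discovered and num_base_bdevs_operational. A stand-alone assertion over that same JSON, built only from commands already traced in this log (assert_degraded is an illustrative name, not a helper from the suite), could be sketched as:

# Sketch only -- field names are taken from the raid_bdev_info JSON above.
assert_degraded() {
        local info
        info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.state'      <<< "$info") == online ]] &&
        [[ $(jq -r '.raid_level' <<< "$info") == raid1  ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered'  <<< "$info") -eq 1 ]] &&
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 1 ]]
}

assert_degraded && echo "raid_bdev1 is online but degraded (1 of 2 base bdevs)"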
00:25:54.072 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.072 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.639 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.897 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.897 "name": "raid_bdev1", 00:25:54.897 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:54.897 "strip_size_kb": 0, 00:25:54.897 "state": "online", 00:25:54.897 "raid_level": "raid1", 00:25:54.897 "superblock": true, 00:25:54.897 "num_base_bdevs": 2, 00:25:54.897 "num_base_bdevs_discovered": 1, 00:25:54.897 "num_base_bdevs_operational": 1, 00:25:54.898 "base_bdevs_list": [ 00:25:54.898 { 00:25:54.898 "name": null, 00:25:54.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.898 "is_configured": false, 00:25:54.898 "data_offset": 256, 00:25:54.898 "data_size": 7936 00:25:54.898 }, 00:25:54.898 { 00:25:54.898 "name": "BaseBdev2", 00:25:54.898 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:54.898 "is_configured": true, 00:25:54.898 "data_offset": 256, 00:25:54.898 "data_size": 7936 00:25:54.898 } 00:25:54.898 ] 00:25:54.898 }' 00:25:54.898 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:54.898 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:54.898 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.898 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:54.898 12:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:55.156 12:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:55.414 [2024-07-25 12:07:41.388346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:55.414 [2024-07-25 12:07:41.388389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.414 [2024-07-25 12:07:41.388407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x149e6e0 00:25:55.414 [2024-07-25 12:07:41.388418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.414 [2024-07-25 12:07:41.388724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:55.414 [2024-07-25 12:07:41.388739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:55.414 [2024-07-25 12:07:41.388795] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:55.414 [2024-07-25 12:07:41.388806] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:55.414 [2024-07-25 12:07:41.388815] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:55.414 BaseBdev1 00:25:55.414 12:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.348 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.607 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.607 "name": "raid_bdev1", 00:25:56.607 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:56.607 "strip_size_kb": 0, 00:25:56.607 "state": "online", 00:25:56.607 "raid_level": "raid1", 00:25:56.607 "superblock": true, 00:25:56.607 "num_base_bdevs": 2, 00:25:56.607 "num_base_bdevs_discovered": 1, 00:25:56.607 "num_base_bdevs_operational": 1, 00:25:56.607 "base_bdevs_list": [ 00:25:56.607 { 00:25:56.607 "name": null, 00:25:56.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.607 "is_configured": false, 00:25:56.607 "data_offset": 256, 00:25:56.607 "data_size": 7936 00:25:56.607 }, 00:25:56.607 { 00:25:56.607 "name": "BaseBdev2", 00:25:56.607 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:56.607 "is_configured": true, 00:25:56.607 "data_offset": 256, 00:25:56.607 "data_size": 7936 00:25:56.607 } 00:25:56.607 ] 00:25:56.607 }' 00:25:56.607 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.607 12:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.174 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:57.434 "name": "raid_bdev1", 00:25:57.434 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:57.434 "strip_size_kb": 0, 00:25:57.434 "state": "online", 00:25:57.434 "raid_level": "raid1", 00:25:57.434 "superblock": true, 00:25:57.434 "num_base_bdevs": 2, 00:25:57.434 "num_base_bdevs_discovered": 1, 00:25:57.434 "num_base_bdevs_operational": 1, 00:25:57.434 "base_bdevs_list": [ 00:25:57.434 { 00:25:57.434 "name": null, 00:25:57.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.434 "is_configured": false, 00:25:57.434 "data_offset": 256, 00:25:57.434 "data_size": 7936 00:25:57.434 }, 00:25:57.434 { 00:25:57.434 "name": "BaseBdev2", 00:25:57.434 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:57.434 "is_configured": true, 00:25:57.434 "data_offset": 256, 00:25:57.434 "data_size": 7936 00:25:57.434 } 00:25:57.434 ] 00:25:57.434 }' 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:57.434 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:57.693 [2024-07-25 12:07:43.754591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:57.693 [2024-07-25 12:07:43.754698] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:57.693 [2024-07-25 12:07:43.754712] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:57.693 request: 00:25:57.693 { 00:25:57.693 "base_bdev": "BaseBdev1", 00:25:57.693 "raid_bdev": "raid_bdev1", 00:25:57.693 "method": "bdev_raid_add_base_bdev", 00:25:57.693 "req_id": 1 00:25:57.693 } 00:25:57.693 Got JSON-RPC error response 00:25:57.693 response: 00:25:57.693 { 00:25:57.693 "code": -22, 00:25:57.693 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:57.693 } 00:25:57.693 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:25:57.693 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:57.693 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:57.693 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:57.693 12:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.068 12:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.068 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.068 "name": "raid_bdev1", 
00:25:59.068 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:59.068 "strip_size_kb": 0, 00:25:59.068 "state": "online", 00:25:59.068 "raid_level": "raid1", 00:25:59.068 "superblock": true, 00:25:59.068 "num_base_bdevs": 2, 00:25:59.068 "num_base_bdevs_discovered": 1, 00:25:59.068 "num_base_bdevs_operational": 1, 00:25:59.068 "base_bdevs_list": [ 00:25:59.068 { 00:25:59.068 "name": null, 00:25:59.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.068 "is_configured": false, 00:25:59.068 "data_offset": 256, 00:25:59.068 "data_size": 7936 00:25:59.068 }, 00:25:59.068 { 00:25:59.068 "name": "BaseBdev2", 00:25:59.068 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:59.068 "is_configured": true, 00:25:59.068 "data_offset": 256, 00:25:59.068 "data_size": 7936 00:25:59.068 } 00:25:59.068 ] 00:25:59.068 }' 00:25:59.068 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.068 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.634 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.892 "name": "raid_bdev1", 00:25:59.892 "uuid": "84744334-f7b4-49b1-baea-a63ac5625d15", 00:25:59.892 "strip_size_kb": 0, 00:25:59.892 "state": "online", 00:25:59.892 "raid_level": "raid1", 00:25:59.892 "superblock": true, 00:25:59.892 "num_base_bdevs": 2, 00:25:59.892 "num_base_bdevs_discovered": 1, 00:25:59.892 "num_base_bdevs_operational": 1, 00:25:59.892 "base_bdevs_list": [ 00:25:59.892 { 00:25:59.892 "name": null, 00:25:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.892 "is_configured": false, 00:25:59.892 "data_offset": 256, 00:25:59.892 "data_size": 7936 00:25:59.892 }, 00:25:59.892 { 00:25:59.892 "name": "BaseBdev2", 00:25:59.892 "uuid": "92eba932-eccc-541e-be70-07443292c5be", 00:25:59.892 "is_configured": true, 00:25:59.892 "data_offset": 256, 00:25:59.892 "data_size": 7936 00:25:59.892 } 00:25:59.892 ] 00:25:59.892 }' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 68701 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # 
'[' -z 68701 ']' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 68701 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68701 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68701' 00:25:59.892 killing process with pid 68701 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 68701 00:25:59.892 Received shutdown signal, test time was about 60.000000 seconds 00:25:59.892 00:25:59.892 Latency(us) 00:25:59.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.892 =================================================================================================================== 00:25:59.892 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:59.892 [2024-07-25 12:07:45.948018] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.892 [2024-07-25 12:07:45.948094] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.892 [2024-07-25 12:07:45.948133] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.892 [2024-07-25 12:07:45.948150] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14008b0 name raid_bdev1, state offline 00:25:59.892 12:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 68701 00:25:59.892 [2024-07-25 12:07:45.973193] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:00.151 12:07:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:26:00.151 00:26:00.151 real 0m29.623s 00:26:00.151 user 0m45.770s 00:26:00.151 sys 0m4.833s 00:26:00.151 12:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.151 12:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:00.151 ************************************ 00:26:00.151 END TEST raid_rebuild_test_sb_4k 00:26:00.151 ************************************ 00:26:00.151 12:07:46 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:26:00.151 12:07:46 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:26:00.151 12:07:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:00.151 12:07:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.151 12:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:00.151 ************************************ 00:26:00.151 START TEST raid_state_function_test_sb_md_separate 00:26:00.151 ************************************ 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:00.151 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=74100 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 74100' 00:26:00.152 Process raid pid: 74100 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 74100 /var/tmp/spdk-raid.sock 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 74100 ']' 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:00.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.152 12:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:00.411 [2024-07-25 12:07:46.320759] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:26:00.411 [2024-07-25 12:07:46.320818] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.411 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.411 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:00.411 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.411 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:00.411 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.411 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:00.411 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.411 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:00.411 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.411 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:00.412 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:00.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:00.412 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:00.412 [2024-07-25 12:07:46.452529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.671 [2024-07-25 12:07:46.536575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.671 [2024-07-25 12:07:46.596666] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.671 [2024-07-25 12:07:46.596701] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.239 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.239 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:01.239 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:01.498 [2024-07-25 12:07:47.424422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:01.498 [2024-07-25 12:07:47.424460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:01.498 [2024-07-25 12:07:47.424470] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:01.498 [2024-07-25 12:07:47.424481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.498 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.758 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.758 "name": "Existed_Raid", 00:26:01.758 "uuid": "48ce7718-47d9-40b4-9351-dd6ea537fb5e", 00:26:01.758 "strip_size_kb": 0, 00:26:01.758 "state": "configuring", 00:26:01.758 "raid_level": "raid1", 00:26:01.758 "superblock": true, 00:26:01.758 "num_base_bdevs": 2, 00:26:01.758 "num_base_bdevs_discovered": 0, 00:26:01.758 "num_base_bdevs_operational": 2, 00:26:01.758 "base_bdevs_list": [ 00:26:01.758 { 00:26:01.758 "name": "BaseBdev1", 00:26:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.758 "is_configured": false, 00:26:01.758 "data_offset": 0, 00:26:01.758 "data_size": 0 00:26:01.758 }, 00:26:01.758 { 00:26:01.758 "name": "BaseBdev2", 00:26:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.758 "is_configured": false, 00:26:01.758 "data_offset": 0, 00:26:01.758 "data_size": 0 00:26:01.758 } 00:26:01.758 ] 00:26:01.758 }' 00:26:01.758 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.758 12:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:02.325 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:02.325 [2024-07-25 12:07:48.434951] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:02.325 [2024-07-25 12:07:48.434980] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16f5f20 name Existed_Raid, state configuring 00:26:02.583 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:02.583 [2024-07-25 12:07:48.607414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.583 [2024-07-25 12:07:48.607440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.583 [2024-07-25 12:07:48.607449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.583 [2024-07-25 12:07:48.607459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.583 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:26:02.841 [2024-07-25 12:07:48.781870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.841 BaseBdev1 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:02.841 12:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:03.101 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:03.362 [ 00:26:03.362 { 00:26:03.362 "name": "BaseBdev1", 00:26:03.362 "aliases": [ 00:26:03.362 "5b7f64a0-c0a9-4227-846f-5532e5bc89ac" 00:26:03.362 ], 00:26:03.362 "product_name": "Malloc disk", 00:26:03.362 "block_size": 4096, 00:26:03.362 "num_blocks": 8192, 00:26:03.362 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:03.362 "md_size": 32, 00:26:03.362 "md_interleave": false, 00:26:03.362 "dif_type": 0, 00:26:03.362 "assigned_rate_limits": { 00:26:03.362 "rw_ios_per_sec": 0, 00:26:03.362 "rw_mbytes_per_sec": 0, 00:26:03.362 "r_mbytes_per_sec": 0, 00:26:03.362 "w_mbytes_per_sec": 0 00:26:03.362 }, 00:26:03.362 "claimed": true, 00:26:03.362 "claim_type": "exclusive_write", 00:26:03.362 "zoned": false, 00:26:03.362 "supported_io_types": { 00:26:03.362 "read": true, 00:26:03.362 "write": true, 00:26:03.362 "unmap": true, 00:26:03.362 "flush": true, 00:26:03.362 "reset": true, 00:26:03.362 "nvme_admin": false, 00:26:03.362 "nvme_io": false, 00:26:03.362 "nvme_io_md": false, 00:26:03.362 "write_zeroes": true, 00:26:03.362 "zcopy": true, 00:26:03.362 "get_zone_info": false, 00:26:03.362 "zone_management": false, 00:26:03.362 "zone_append": false, 00:26:03.362 "compare": false, 00:26:03.362 "compare_and_write": false, 00:26:03.362 "abort": 
true, 00:26:03.362 "seek_hole": false, 00:26:03.362 "seek_data": false, 00:26:03.362 "copy": true, 00:26:03.362 "nvme_iov_md": false 00:26:03.362 }, 00:26:03.362 "memory_domains": [ 00:26:03.362 { 00:26:03.362 "dma_device_id": "system", 00:26:03.362 "dma_device_type": 1 00:26:03.362 }, 00:26:03.362 { 00:26:03.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.362 "dma_device_type": 2 00:26:03.362 } 00:26:03.362 ], 00:26:03.362 "driver_specific": {} 00:26:03.362 } 00:26:03.362 ] 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.362 "name": "Existed_Raid", 00:26:03.362 "uuid": "6d0922f2-5229-4a01-b06f-d7e55c7dc2d9", 00:26:03.362 "strip_size_kb": 0, 00:26:03.362 "state": "configuring", 00:26:03.362 "raid_level": "raid1", 00:26:03.362 "superblock": true, 00:26:03.362 "num_base_bdevs": 2, 00:26:03.362 "num_base_bdevs_discovered": 1, 00:26:03.362 "num_base_bdevs_operational": 2, 00:26:03.362 "base_bdevs_list": [ 00:26:03.362 { 00:26:03.362 "name": "BaseBdev1", 00:26:03.362 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:03.362 "is_configured": true, 00:26:03.362 "data_offset": 256, 00:26:03.362 "data_size": 7936 00:26:03.362 }, 00:26:03.362 { 00:26:03.362 "name": "BaseBdev2", 00:26:03.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.362 "is_configured": false, 00:26:03.362 "data_offset": 0, 00:26:03.362 "data_size": 0 00:26:03.362 } 00:26:03.362 ] 00:26:03.362 }' 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.362 12:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:03.931 12:07:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:04.190 [2024-07-25 12:07:50.201639] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:04.190 [2024-07-25 12:07:50.201675] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16f5810 name Existed_Raid, state configuring 00:26:04.190 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:04.450 [2024-07-25 12:07:50.430282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:04.450 [2024-07-25 12:07:50.431680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:04.450 [2024-07-25 12:07:50.431712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.450 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.709 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.709 "name": "Existed_Raid", 00:26:04.709 "uuid": "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85", 00:26:04.709 "strip_size_kb": 0, 00:26:04.709 "state": "configuring", 00:26:04.709 "raid_level": "raid1", 00:26:04.709 "superblock": true, 00:26:04.709 "num_base_bdevs": 2, 00:26:04.709 "num_base_bdevs_discovered": 1, 00:26:04.709 "num_base_bdevs_operational": 2, 00:26:04.709 "base_bdevs_list": [ 
00:26:04.709 { 00:26:04.709 "name": "BaseBdev1", 00:26:04.709 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:04.709 "is_configured": true, 00:26:04.709 "data_offset": 256, 00:26:04.709 "data_size": 7936 00:26:04.709 }, 00:26:04.709 { 00:26:04.709 "name": "BaseBdev2", 00:26:04.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.709 "is_configured": false, 00:26:04.709 "data_offset": 0, 00:26:04.709 "data_size": 0 00:26:04.709 } 00:26:04.709 ] 00:26:04.709 }' 00:26:04.709 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.709 12:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:05.315 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:26:05.575 [2024-07-25 12:07:51.484797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.575 [2024-07-25 12:07:51.484924] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16f4f50 00:26:05.575 [2024-07-25 12:07:51.484936] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:05.575 [2024-07-25 12:07:51.484992] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16f4990 00:26:05.575 [2024-07-25 12:07:51.485077] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16f4f50 00:26:05.575 [2024-07-25 12:07:51.485086] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16f4f50 00:26:05.575 [2024-07-25 12:07:51.485157] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.575 BaseBdev2 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:05.575 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:05.833 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:05.833 [ 00:26:05.833 { 00:26:05.833 "name": "BaseBdev2", 00:26:05.833 "aliases": [ 00:26:05.833 "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e" 00:26:05.833 ], 00:26:05.833 "product_name": "Malloc disk", 00:26:05.833 "block_size": 4096, 00:26:05.833 "num_blocks": 8192, 00:26:05.833 "uuid": "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e", 00:26:05.833 "md_size": 32, 00:26:05.833 "md_interleave": false, 00:26:05.833 "dif_type": 0, 00:26:05.833 "assigned_rate_limits": { 00:26:05.833 
"rw_ios_per_sec": 0, 00:26:05.833 "rw_mbytes_per_sec": 0, 00:26:05.833 "r_mbytes_per_sec": 0, 00:26:05.833 "w_mbytes_per_sec": 0 00:26:05.833 }, 00:26:05.833 "claimed": true, 00:26:05.833 "claim_type": "exclusive_write", 00:26:05.833 "zoned": false, 00:26:05.833 "supported_io_types": { 00:26:05.833 "read": true, 00:26:05.833 "write": true, 00:26:05.833 "unmap": true, 00:26:05.833 "flush": true, 00:26:05.833 "reset": true, 00:26:05.833 "nvme_admin": false, 00:26:05.833 "nvme_io": false, 00:26:05.833 "nvme_io_md": false, 00:26:05.833 "write_zeroes": true, 00:26:05.833 "zcopy": true, 00:26:05.833 "get_zone_info": false, 00:26:05.833 "zone_management": false, 00:26:05.833 "zone_append": false, 00:26:05.833 "compare": false, 00:26:05.833 "compare_and_write": false, 00:26:05.833 "abort": true, 00:26:05.833 "seek_hole": false, 00:26:05.833 "seek_data": false, 00:26:05.833 "copy": true, 00:26:05.833 "nvme_iov_md": false 00:26:05.833 }, 00:26:05.833 "memory_domains": [ 00:26:05.833 { 00:26:05.833 "dma_device_id": "system", 00:26:05.833 "dma_device_type": 1 00:26:05.833 }, 00:26:05.833 { 00:26:05.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.833 "dma_device_type": 2 00:26:05.833 } 00:26:05.833 ], 00:26:05.833 "driver_specific": {} 00:26:05.833 } 00:26:05.833 ] 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.092 12:07:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.092 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.092 "name": "Existed_Raid", 00:26:06.092 "uuid": "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85", 00:26:06.092 "strip_size_kb": 0, 
00:26:06.092 "state": "online", 00:26:06.092 "raid_level": "raid1", 00:26:06.092 "superblock": true, 00:26:06.092 "num_base_bdevs": 2, 00:26:06.092 "num_base_bdevs_discovered": 2, 00:26:06.092 "num_base_bdevs_operational": 2, 00:26:06.092 "base_bdevs_list": [ 00:26:06.092 { 00:26:06.092 "name": "BaseBdev1", 00:26:06.092 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:06.092 "is_configured": true, 00:26:06.092 "data_offset": 256, 00:26:06.092 "data_size": 7936 00:26:06.092 }, 00:26:06.092 { 00:26:06.092 "name": "BaseBdev2", 00:26:06.092 "uuid": "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e", 00:26:06.092 "is_configured": true, 00:26:06.092 "data_offset": 256, 00:26:06.092 "data_size": 7936 00:26:06.092 } 00:26:06.092 ] 00:26:06.092 }' 00:26:06.092 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.092 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:06.659 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:06.918 [2024-07-25 12:07:52.977188] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:06.918 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:06.918 "name": "Existed_Raid", 00:26:06.918 "aliases": [ 00:26:06.918 "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85" 00:26:06.918 ], 00:26:06.918 "product_name": "Raid Volume", 00:26:06.918 "block_size": 4096, 00:26:06.918 "num_blocks": 7936, 00:26:06.918 "uuid": "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85", 00:26:06.918 "md_size": 32, 00:26:06.918 "md_interleave": false, 00:26:06.918 "dif_type": 0, 00:26:06.918 "assigned_rate_limits": { 00:26:06.918 "rw_ios_per_sec": 0, 00:26:06.918 "rw_mbytes_per_sec": 0, 00:26:06.918 "r_mbytes_per_sec": 0, 00:26:06.918 "w_mbytes_per_sec": 0 00:26:06.918 }, 00:26:06.918 "claimed": false, 00:26:06.918 "zoned": false, 00:26:06.918 "supported_io_types": { 00:26:06.918 "read": true, 00:26:06.918 "write": true, 00:26:06.918 "unmap": false, 00:26:06.918 "flush": false, 00:26:06.918 "reset": true, 00:26:06.918 "nvme_admin": false, 00:26:06.918 "nvme_io": false, 00:26:06.918 "nvme_io_md": false, 00:26:06.918 "write_zeroes": true, 00:26:06.918 "zcopy": false, 00:26:06.918 "get_zone_info": false, 00:26:06.918 "zone_management": false, 00:26:06.918 "zone_append": false, 00:26:06.918 "compare": false, 00:26:06.918 "compare_and_write": false, 00:26:06.918 "abort": false, 00:26:06.918 "seek_hole": false, 
00:26:06.918 "seek_data": false, 00:26:06.918 "copy": false, 00:26:06.918 "nvme_iov_md": false 00:26:06.918 }, 00:26:06.918 "memory_domains": [ 00:26:06.918 { 00:26:06.918 "dma_device_id": "system", 00:26:06.918 "dma_device_type": 1 00:26:06.918 }, 00:26:06.918 { 00:26:06.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.918 "dma_device_type": 2 00:26:06.918 }, 00:26:06.918 { 00:26:06.918 "dma_device_id": "system", 00:26:06.918 "dma_device_type": 1 00:26:06.918 }, 00:26:06.918 { 00:26:06.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.918 "dma_device_type": 2 00:26:06.918 } 00:26:06.918 ], 00:26:06.918 "driver_specific": { 00:26:06.918 "raid": { 00:26:06.918 "uuid": "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85", 00:26:06.918 "strip_size_kb": 0, 00:26:06.918 "state": "online", 00:26:06.918 "raid_level": "raid1", 00:26:06.918 "superblock": true, 00:26:06.918 "num_base_bdevs": 2, 00:26:06.918 "num_base_bdevs_discovered": 2, 00:26:06.918 "num_base_bdevs_operational": 2, 00:26:06.918 "base_bdevs_list": [ 00:26:06.918 { 00:26:06.918 "name": "BaseBdev1", 00:26:06.918 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:06.918 "is_configured": true, 00:26:06.918 "data_offset": 256, 00:26:06.918 "data_size": 7936 00:26:06.918 }, 00:26:06.918 { 00:26:06.918 "name": "BaseBdev2", 00:26:06.918 "uuid": "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e", 00:26:06.918 "is_configured": true, 00:26:06.918 "data_offset": 256, 00:26:06.918 "data_size": 7936 00:26:06.918 } 00:26:06.918 ] 00:26:06.918 } 00:26:06.918 } 00:26:06.918 }' 00:26:06.918 12:07:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:07.177 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:07.177 BaseBdev2' 00:26:07.177 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.177 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:07.177 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:07.177 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:07.177 "name": "BaseBdev1", 00:26:07.177 "aliases": [ 00:26:07.177 "5b7f64a0-c0a9-4227-846f-5532e5bc89ac" 00:26:07.177 ], 00:26:07.177 "product_name": "Malloc disk", 00:26:07.177 "block_size": 4096, 00:26:07.177 "num_blocks": 8192, 00:26:07.177 "uuid": "5b7f64a0-c0a9-4227-846f-5532e5bc89ac", 00:26:07.177 "md_size": 32, 00:26:07.177 "md_interleave": false, 00:26:07.177 "dif_type": 0, 00:26:07.177 "assigned_rate_limits": { 00:26:07.177 "rw_ios_per_sec": 0, 00:26:07.177 "rw_mbytes_per_sec": 0, 00:26:07.177 "r_mbytes_per_sec": 0, 00:26:07.177 "w_mbytes_per_sec": 0 00:26:07.177 }, 00:26:07.177 "claimed": true, 00:26:07.177 "claim_type": "exclusive_write", 00:26:07.177 "zoned": false, 00:26:07.177 "supported_io_types": { 00:26:07.177 "read": true, 00:26:07.177 "write": true, 00:26:07.177 "unmap": true, 00:26:07.177 "flush": true, 00:26:07.177 "reset": true, 00:26:07.178 "nvme_admin": false, 00:26:07.178 "nvme_io": false, 00:26:07.178 "nvme_io_md": false, 00:26:07.178 "write_zeroes": true, 00:26:07.178 "zcopy": true, 00:26:07.178 "get_zone_info": false, 00:26:07.178 
"zone_management": false, 00:26:07.178 "zone_append": false, 00:26:07.178 "compare": false, 00:26:07.178 "compare_and_write": false, 00:26:07.178 "abort": true, 00:26:07.178 "seek_hole": false, 00:26:07.178 "seek_data": false, 00:26:07.178 "copy": true, 00:26:07.178 "nvme_iov_md": false 00:26:07.178 }, 00:26:07.178 "memory_domains": [ 00:26:07.178 { 00:26:07.178 "dma_device_id": "system", 00:26:07.178 "dma_device_type": 1 00:26:07.178 }, 00:26:07.178 { 00:26:07.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.178 "dma_device_type": 2 00:26:07.178 } 00:26:07.178 ], 00:26:07.178 "driver_specific": {} 00:26:07.178 }' 00:26:07.178 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:07.438 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.699 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.699 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:07.699 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.699 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:07.699 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:07.958 "name": "BaseBdev2", 00:26:07.958 "aliases": [ 00:26:07.958 "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e" 00:26:07.958 ], 00:26:07.958 "product_name": "Malloc disk", 00:26:07.958 "block_size": 4096, 00:26:07.958 "num_blocks": 8192, 00:26:07.958 "uuid": "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e", 00:26:07.958 "md_size": 32, 00:26:07.958 "md_interleave": false, 00:26:07.958 "dif_type": 0, 00:26:07.958 "assigned_rate_limits": { 00:26:07.958 "rw_ios_per_sec": 0, 00:26:07.958 "rw_mbytes_per_sec": 0, 00:26:07.958 "r_mbytes_per_sec": 0, 00:26:07.958 "w_mbytes_per_sec": 0 00:26:07.958 }, 00:26:07.958 "claimed": true, 00:26:07.958 "claim_type": "exclusive_write", 00:26:07.958 "zoned": false, 00:26:07.958 "supported_io_types": { 00:26:07.958 "read": true, 00:26:07.958 "write": true, 00:26:07.958 "unmap": true, 00:26:07.958 "flush": true, 00:26:07.958 "reset": true, 00:26:07.958 "nvme_admin": false, 00:26:07.958 "nvme_io": false, 
00:26:07.958 "nvme_io_md": false, 00:26:07.958 "write_zeroes": true, 00:26:07.958 "zcopy": true, 00:26:07.958 "get_zone_info": false, 00:26:07.958 "zone_management": false, 00:26:07.958 "zone_append": false, 00:26:07.958 "compare": false, 00:26:07.958 "compare_and_write": false, 00:26:07.958 "abort": true, 00:26:07.958 "seek_hole": false, 00:26:07.958 "seek_data": false, 00:26:07.958 "copy": true, 00:26:07.958 "nvme_iov_md": false 00:26:07.958 }, 00:26:07.958 "memory_domains": [ 00:26:07.958 { 00:26:07.958 "dma_device_id": "system", 00:26:07.958 "dma_device_type": 1 00:26:07.958 }, 00:26:07.958 { 00:26:07.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.958 "dma_device_type": 2 00:26:07.958 } 00:26:07.958 ], 00:26:07.958 "driver_specific": {} 00:26:07.958 }' 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.958 12:07:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.958 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:07.958 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.958 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.216 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:08.216 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.216 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.216 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:08.216 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:08.475 [2024-07-25 12:07:54.396725] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:08.475 12:07:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.475 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.734 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.734 "name": "Existed_Raid", 00:26:08.734 "uuid": "c3ee0dbe-d4ea-4d3b-8dbc-853ea8187a85", 00:26:08.734 "strip_size_kb": 0, 00:26:08.734 "state": "online", 00:26:08.734 "raid_level": "raid1", 00:26:08.734 "superblock": true, 00:26:08.734 "num_base_bdevs": 2, 00:26:08.734 "num_base_bdevs_discovered": 1, 00:26:08.734 "num_base_bdevs_operational": 1, 00:26:08.734 "base_bdevs_list": [ 00:26:08.734 { 00:26:08.734 "name": null, 00:26:08.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.734 "is_configured": false, 00:26:08.734 "data_offset": 256, 00:26:08.734 "data_size": 7936 00:26:08.734 }, 00:26:08.734 { 00:26:08.734 "name": "BaseBdev2", 00:26:08.734 "uuid": "9c6e3577-c0ab-4ffb-9de1-46dba66fb70e", 00:26:08.734 "is_configured": true, 00:26:08.734 "data_offset": 256, 00:26:08.734 "data_size": 7936 00:26:08.734 } 00:26:08.734 ] 00:26:08.734 }' 00:26:08.734 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.734 12:07:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:09.300 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:09.300 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:09.300 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.300 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:09.559 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:09.559 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:09.559 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:09.559 [2024-07-25 12:07:55.678069] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:09.559 [2024-07-25 12:07:55.678151] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.818 [2024-07-25 12:07:55.689070] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.818 [2024-07-25 12:07:55.689099] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.818 [2024-07-25 12:07:55.689109] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16f4f50 name Existed_Raid, state offline 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 74100 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 74100 ']' 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 74100 00:26:09.818 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74100 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74100' 00:26:10.078 killing process with pid 74100 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 74100 00:26:10.078 [2024-07-25 12:07:55.993961] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.078 12:07:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 74100 00:26:10.078 [2024-07-25 12:07:55.994804] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:10.078 12:07:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:26:10.078 00:26:10.078 real 0m9.930s 00:26:10.078 user 0m17.593s 00:26:10.078 sys 0m1.953s 00:26:10.078 12:07:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:26:10.078 12:07:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:10.078 ************************************ 00:26:10.078 END TEST raid_state_function_test_sb_md_separate 00:26:10.078 ************************************ 00:26:10.337 12:07:56 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:26:10.337 12:07:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:10.337 12:07:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.337 12:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.337 ************************************ 00:26:10.337 START TEST raid_superblock_test_md_separate 00:26:10.337 ************************************ 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=75922 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 75922 /var/tmp/spdk-raid.sock 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 75922 ']' 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.337 12:07:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:10.337 [2024-07-25 12:07:56.328510] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:26:10.337 [2024-07-25 12:07:56.328565] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:26:10.337 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:10.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:10.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.338 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:10.338 [2024-07-25 12:07:56.450898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.597 [2024-07-25 12:07:56.535857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.597 [2024-07-25 12:07:56.603741] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.597 [2024-07-25 12:07:56.603776] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:11.171 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:26:11.430 malloc1 00:26:11.430 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:11.690 [2024-07-25 12:07:57.672905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:11.690 [2024-07-25 12:07:57.672947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.690 [2024-07-25 12:07:57.672967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2480cc0 00:26:11.690 [2024-07-25 12:07:57.672979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.690 [2024-07-25 12:07:57.674392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.690 [2024-07-25 12:07:57.674418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:11.690 pt1 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:11.690 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:26:11.948 malloc2 00:26:11.948 12:07:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:12.207 [2024-07-25 12:07:58.139260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:12.207 [2024-07-25 12:07:58.139300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.207 [2024-07-25 12:07:58.139316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2593b80 00:26:12.207 [2024-07-25 12:07:58.139328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.207 [2024-07-25 12:07:58.140551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.207 [2024-07-25 12:07:58.140581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:12.207 pt2 00:26:12.207 12:07:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:12.207 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:12.207 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:12.467 [2024-07-25 12:07:58.351837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:12.467 [2024-07-25 12:07:58.352955] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:12.467 [2024-07-25 12:07:58.353089] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24813c0 00:26:12.467 [2024-07-25 12:07:58.353102] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:12.467 [2024-07-25 12:07:58.353174] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25942b0 00:26:12.467 [2024-07-25 12:07:58.353279] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24813c0 00:26:12.467 [2024-07-25 12:07:58.353288] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24813c0 00:26:12.467 [2024-07-25 12:07:58.353350] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.467 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.726 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.726 "name": "raid_bdev1", 00:26:12.726 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:12.726 "strip_size_kb": 0, 00:26:12.726 "state": "online", 00:26:12.726 "raid_level": "raid1", 00:26:12.726 "superblock": true, 00:26:12.726 "num_base_bdevs": 2, 00:26:12.726 "num_base_bdevs_discovered": 2, 00:26:12.726 "num_base_bdevs_operational": 2, 00:26:12.726 "base_bdevs_list": [ 00:26:12.726 { 00:26:12.726 "name": "pt1", 
00:26:12.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:12.726 "is_configured": true, 00:26:12.726 "data_offset": 256, 00:26:12.726 "data_size": 7936 00:26:12.726 }, 00:26:12.726 { 00:26:12.726 "name": "pt2", 00:26:12.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:12.726 "is_configured": true, 00:26:12.726 "data_offset": 256, 00:26:12.726 "data_size": 7936 00:26:12.726 } 00:26:12.726 ] 00:26:12.726 }' 00:26:12.726 12:07:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.726 12:07:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:13.293 [2024-07-25 12:07:59.378755] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:13.293 "name": "raid_bdev1", 00:26:13.293 "aliases": [ 00:26:13.293 "3836ffca-2d32-4210-92e8-3a7869bea57b" 00:26:13.293 ], 00:26:13.293 "product_name": "Raid Volume", 00:26:13.293 "block_size": 4096, 00:26:13.293 "num_blocks": 7936, 00:26:13.293 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:13.293 "md_size": 32, 00:26:13.293 "md_interleave": false, 00:26:13.293 "dif_type": 0, 00:26:13.293 "assigned_rate_limits": { 00:26:13.293 "rw_ios_per_sec": 0, 00:26:13.293 "rw_mbytes_per_sec": 0, 00:26:13.293 "r_mbytes_per_sec": 0, 00:26:13.293 "w_mbytes_per_sec": 0 00:26:13.293 }, 00:26:13.293 "claimed": false, 00:26:13.293 "zoned": false, 00:26:13.293 "supported_io_types": { 00:26:13.293 "read": true, 00:26:13.293 "write": true, 00:26:13.293 "unmap": false, 00:26:13.293 "flush": false, 00:26:13.293 "reset": true, 00:26:13.293 "nvme_admin": false, 00:26:13.293 "nvme_io": false, 00:26:13.293 "nvme_io_md": false, 00:26:13.293 "write_zeroes": true, 00:26:13.293 "zcopy": false, 00:26:13.293 "get_zone_info": false, 00:26:13.293 "zone_management": false, 00:26:13.293 "zone_append": false, 00:26:13.293 "compare": false, 00:26:13.293 "compare_and_write": false, 00:26:13.293 "abort": false, 00:26:13.293 "seek_hole": false, 00:26:13.293 "seek_data": false, 00:26:13.293 "copy": false, 00:26:13.293 "nvme_iov_md": false 00:26:13.293 }, 00:26:13.293 "memory_domains": [ 00:26:13.293 { 00:26:13.293 "dma_device_id": "system", 00:26:13.293 "dma_device_type": 1 00:26:13.293 }, 00:26:13.293 { 00:26:13.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.293 "dma_device_type": 2 00:26:13.293 }, 00:26:13.293 { 00:26:13.293 
"dma_device_id": "system", 00:26:13.293 "dma_device_type": 1 00:26:13.293 }, 00:26:13.293 { 00:26:13.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.293 "dma_device_type": 2 00:26:13.293 } 00:26:13.293 ], 00:26:13.293 "driver_specific": { 00:26:13.293 "raid": { 00:26:13.293 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:13.293 "strip_size_kb": 0, 00:26:13.293 "state": "online", 00:26:13.293 "raid_level": "raid1", 00:26:13.293 "superblock": true, 00:26:13.293 "num_base_bdevs": 2, 00:26:13.293 "num_base_bdevs_discovered": 2, 00:26:13.293 "num_base_bdevs_operational": 2, 00:26:13.293 "base_bdevs_list": [ 00:26:13.293 { 00:26:13.293 "name": "pt1", 00:26:13.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.293 "is_configured": true, 00:26:13.293 "data_offset": 256, 00:26:13.293 "data_size": 7936 00:26:13.293 }, 00:26:13.293 { 00:26:13.293 "name": "pt2", 00:26:13.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:13.293 "is_configured": true, 00:26:13.293 "data_offset": 256, 00:26:13.293 "data_size": 7936 00:26:13.293 } 00:26:13.293 ] 00:26:13.293 } 00:26:13.293 } 00:26:13.293 }' 00:26:13.293 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:13.552 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:13.552 pt2' 00:26:13.552 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:13.552 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:13.552 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:13.811 "name": "pt1", 00:26:13.811 "aliases": [ 00:26:13.811 "00000000-0000-0000-0000-000000000001" 00:26:13.811 ], 00:26:13.811 "product_name": "passthru", 00:26:13.811 "block_size": 4096, 00:26:13.811 "num_blocks": 8192, 00:26:13.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.811 "md_size": 32, 00:26:13.811 "md_interleave": false, 00:26:13.811 "dif_type": 0, 00:26:13.811 "assigned_rate_limits": { 00:26:13.811 "rw_ios_per_sec": 0, 00:26:13.811 "rw_mbytes_per_sec": 0, 00:26:13.811 "r_mbytes_per_sec": 0, 00:26:13.811 "w_mbytes_per_sec": 0 00:26:13.811 }, 00:26:13.811 "claimed": true, 00:26:13.811 "claim_type": "exclusive_write", 00:26:13.811 "zoned": false, 00:26:13.811 "supported_io_types": { 00:26:13.811 "read": true, 00:26:13.811 "write": true, 00:26:13.811 "unmap": true, 00:26:13.811 "flush": true, 00:26:13.811 "reset": true, 00:26:13.811 "nvme_admin": false, 00:26:13.811 "nvme_io": false, 00:26:13.811 "nvme_io_md": false, 00:26:13.811 "write_zeroes": true, 00:26:13.811 "zcopy": true, 00:26:13.811 "get_zone_info": false, 00:26:13.811 "zone_management": false, 00:26:13.811 "zone_append": false, 00:26:13.811 "compare": false, 00:26:13.811 "compare_and_write": false, 00:26:13.811 "abort": true, 00:26:13.811 "seek_hole": false, 00:26:13.811 "seek_data": false, 00:26:13.811 "copy": true, 00:26:13.811 "nvme_iov_md": false 00:26:13.811 }, 00:26:13.811 "memory_domains": [ 00:26:13.811 { 00:26:13.811 "dma_device_id": "system", 00:26:13.811 "dma_device_type": 1 00:26:13.811 }, 00:26:13.811 { 00:26:13.811 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:13.811 "dma_device_type": 2 00:26:13.811 } 00:26:13.811 ], 00:26:13.811 "driver_specific": { 00:26:13.811 "passthru": { 00:26:13.811 "name": "pt1", 00:26:13.811 "base_bdev_name": "malloc1" 00:26:13.811 } 00:26:13.811 } 00:26:13.811 }' 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:13.811 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.070 12:07:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.070 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:14.070 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:14.070 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:14.071 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:14.329 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:14.329 "name": "pt2", 00:26:14.329 "aliases": [ 00:26:14.329 "00000000-0000-0000-0000-000000000002" 00:26:14.329 ], 00:26:14.329 "product_name": "passthru", 00:26:14.329 "block_size": 4096, 00:26:14.329 "num_blocks": 8192, 00:26:14.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.329 "md_size": 32, 00:26:14.329 "md_interleave": false, 00:26:14.329 "dif_type": 0, 00:26:14.329 "assigned_rate_limits": { 00:26:14.329 "rw_ios_per_sec": 0, 00:26:14.329 "rw_mbytes_per_sec": 0, 00:26:14.329 "r_mbytes_per_sec": 0, 00:26:14.329 "w_mbytes_per_sec": 0 00:26:14.329 }, 00:26:14.329 "claimed": true, 00:26:14.329 "claim_type": "exclusive_write", 00:26:14.329 "zoned": false, 00:26:14.329 "supported_io_types": { 00:26:14.329 "read": true, 00:26:14.329 "write": true, 00:26:14.329 "unmap": true, 00:26:14.329 "flush": true, 00:26:14.329 "reset": true, 00:26:14.329 "nvme_admin": false, 00:26:14.329 "nvme_io": false, 00:26:14.329 "nvme_io_md": false, 00:26:14.329 "write_zeroes": true, 00:26:14.329 "zcopy": true, 00:26:14.329 "get_zone_info": false, 00:26:14.329 "zone_management": false, 00:26:14.329 "zone_append": false, 00:26:14.330 "compare": false, 00:26:14.330 "compare_and_write": false, 00:26:14.330 "abort": true, 00:26:14.330 "seek_hole": false, 00:26:14.330 "seek_data": false, 00:26:14.330 "copy": true, 00:26:14.330 "nvme_iov_md": false 00:26:14.330 }, 00:26:14.330 "memory_domains": [ 
00:26:14.330 { 00:26:14.330 "dma_device_id": "system", 00:26:14.330 "dma_device_type": 1 00:26:14.330 }, 00:26:14.330 { 00:26:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.330 "dma_device_type": 2 00:26:14.330 } 00:26:14.330 ], 00:26:14.330 "driver_specific": { 00:26:14.330 "passthru": { 00:26:14.330 "name": "pt2", 00:26:14.330 "base_bdev_name": "malloc2" 00:26:14.330 } 00:26:14.330 } 00:26:14.330 }' 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:14.330 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:14.588 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:14.847 [2024-07-25 12:08:00.790478] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.847 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3836ffca-2d32-4210-92e8-3a7869bea57b 00:26:14.847 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 3836ffca-2d32-4210-92e8-3a7869bea57b ']' 00:26:14.847 12:08:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:15.105 [2024-07-25 12:08:01.018845] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.105 [2024-07-25 12:08:01.018863] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:15.105 [2024-07-25 12:08:01.018913] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:15.105 [2024-07-25 12:08:01.018961] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:15.105 [2024-07-25 12:08:01.018971] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24813c0 name raid_bdev1, state offline 00:26:15.105 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:15.105 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:15.363 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:15.363 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:15.364 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:15.364 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:15.622 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:15.622 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:15.622 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:15.622 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:15.881 12:08:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:16.141 [2024-07-25 12:08:02.165813] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:16.141 [2024-07-25 12:08:02.167055] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:16.141 [2024-07-25 12:08:02.167106] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:16.141 [2024-07-25 12:08:02.167152] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:16.141 [2024-07-25 12:08:02.167170] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.141 [2024-07-25 12:08:02.167179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25952b0 name raid_bdev1, state configuring 00:26:16.141 request: 00:26:16.141 { 00:26:16.141 "name": "raid_bdev1", 00:26:16.141 "raid_level": "raid1", 00:26:16.141 "base_bdevs": [ 00:26:16.141 "malloc1", 00:26:16.141 "malloc2" 00:26:16.141 ], 00:26:16.141 "superblock": false, 00:26:16.141 "method": "bdev_raid_create", 00:26:16.141 "req_id": 1 00:26:16.141 } 00:26:16.141 Got JSON-RPC error response 00:26:16.141 response: 00:26:16.141 { 00:26:16.141 "code": -17, 00:26:16.141 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:16.141 } 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:16.141 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.400 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:16.400 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:16.400 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:16.658 [2024-07-25 12:08:02.622965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:16.658 [2024-07-25 12:08:02.623010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.658 [2024-07-25 12:08:02.623031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23fec30 00:26:16.658 [2024-07-25 12:08:02.623042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.658 [2024-07-25 12:08:02.624379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.658 [2024-07-25 12:08:02.624405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:16.658 [2024-07-25 12:08:02.624447] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 
00:26:16.658 [2024-07-25 12:08:02.624472] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:16.658 pt1 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.658 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.659 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.659 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.917 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.917 "name": "raid_bdev1", 00:26:16.917 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:16.917 "strip_size_kb": 0, 00:26:16.917 "state": "configuring", 00:26:16.917 "raid_level": "raid1", 00:26:16.917 "superblock": true, 00:26:16.917 "num_base_bdevs": 2, 00:26:16.917 "num_base_bdevs_discovered": 1, 00:26:16.917 "num_base_bdevs_operational": 2, 00:26:16.917 "base_bdevs_list": [ 00:26:16.917 { 00:26:16.917 "name": "pt1", 00:26:16.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:16.917 "is_configured": true, 00:26:16.917 "data_offset": 256, 00:26:16.917 "data_size": 7936 00:26:16.917 }, 00:26:16.917 { 00:26:16.917 "name": null, 00:26:16.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.917 "is_configured": false, 00:26:16.917 "data_offset": 256, 00:26:16.917 "data_size": 7936 00:26:16.917 } 00:26:16.917 ] 00:26:16.917 }' 00:26:16.917 12:08:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.917 12:08:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:17.485 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:26:17.485 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:17.485 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:17.485 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:17.743 [2024-07-25 
12:08:03.661718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:17.743 [2024-07-25 12:08:03.661760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.743 [2024-07-25 12:08:03.661777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25971c0 00:26:17.743 [2024-07-25 12:08:03.661788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.743 [2024-07-25 12:08:03.661967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.743 [2024-07-25 12:08:03.661983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:17.743 [2024-07-25 12:08:03.662022] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:17.743 [2024-07-25 12:08:03.662039] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:17.743 [2024-07-25 12:08:03.662122] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2596850 00:26:17.743 [2024-07-25 12:08:03.662131] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:17.743 [2024-07-25 12:08:03.662192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2598360 00:26:17.743 [2024-07-25 12:08:03.662285] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2596850 00:26:17.743 [2024-07-25 12:08:03.662294] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2596850 00:26:17.743 [2024-07-25 12:08:03.662358] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.743 pt2 00:26:17.743 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:17.743 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:17.743 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.744 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.002 12:08:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.002 "name": "raid_bdev1", 00:26:18.002 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:18.002 "strip_size_kb": 0, 00:26:18.002 "state": "online", 00:26:18.002 "raid_level": "raid1", 00:26:18.002 "superblock": true, 00:26:18.002 "num_base_bdevs": 2, 00:26:18.002 "num_base_bdevs_discovered": 2, 00:26:18.002 "num_base_bdevs_operational": 2, 00:26:18.002 "base_bdevs_list": [ 00:26:18.002 { 00:26:18.002 "name": "pt1", 00:26:18.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:18.002 "is_configured": true, 00:26:18.002 "data_offset": 256, 00:26:18.002 "data_size": 7936 00:26:18.002 }, 00:26:18.002 { 00:26:18.002 "name": "pt2", 00:26:18.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:18.002 "is_configured": true, 00:26:18.002 "data_offset": 256, 00:26:18.002 "data_size": 7936 00:26:18.002 } 00:26:18.002 ] 00:26:18.002 }' 00:26:18.002 12:08:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.002 12:08:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:18.568 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:18.857 [2024-07-25 12:08:04.700690] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.857 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:18.857 "name": "raid_bdev1", 00:26:18.857 "aliases": [ 00:26:18.857 "3836ffca-2d32-4210-92e8-3a7869bea57b" 00:26:18.857 ], 00:26:18.857 "product_name": "Raid Volume", 00:26:18.857 "block_size": 4096, 00:26:18.857 "num_blocks": 7936, 00:26:18.857 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:18.857 "md_size": 32, 00:26:18.857 "md_interleave": false, 00:26:18.857 "dif_type": 0, 00:26:18.857 "assigned_rate_limits": { 00:26:18.857 "rw_ios_per_sec": 0, 00:26:18.857 "rw_mbytes_per_sec": 0, 00:26:18.857 "r_mbytes_per_sec": 0, 00:26:18.857 "w_mbytes_per_sec": 0 00:26:18.857 }, 00:26:18.857 "claimed": false, 00:26:18.857 "zoned": false, 00:26:18.857 "supported_io_types": { 00:26:18.857 "read": true, 00:26:18.857 "write": true, 00:26:18.857 "unmap": false, 00:26:18.857 "flush": false, 00:26:18.857 "reset": true, 00:26:18.858 "nvme_admin": false, 00:26:18.858 "nvme_io": false, 00:26:18.858 "nvme_io_md": false, 00:26:18.858 "write_zeroes": true, 00:26:18.858 "zcopy": false, 00:26:18.858 "get_zone_info": false, 00:26:18.858 "zone_management": false, 00:26:18.858 "zone_append": false, 00:26:18.858 
"compare": false, 00:26:18.858 "compare_and_write": false, 00:26:18.858 "abort": false, 00:26:18.858 "seek_hole": false, 00:26:18.858 "seek_data": false, 00:26:18.858 "copy": false, 00:26:18.858 "nvme_iov_md": false 00:26:18.858 }, 00:26:18.858 "memory_domains": [ 00:26:18.858 { 00:26:18.858 "dma_device_id": "system", 00:26:18.858 "dma_device_type": 1 00:26:18.858 }, 00:26:18.858 { 00:26:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.858 "dma_device_type": 2 00:26:18.858 }, 00:26:18.858 { 00:26:18.858 "dma_device_id": "system", 00:26:18.858 "dma_device_type": 1 00:26:18.858 }, 00:26:18.858 { 00:26:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.858 "dma_device_type": 2 00:26:18.858 } 00:26:18.858 ], 00:26:18.858 "driver_specific": { 00:26:18.858 "raid": { 00:26:18.858 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:18.858 "strip_size_kb": 0, 00:26:18.858 "state": "online", 00:26:18.858 "raid_level": "raid1", 00:26:18.858 "superblock": true, 00:26:18.858 "num_base_bdevs": 2, 00:26:18.858 "num_base_bdevs_discovered": 2, 00:26:18.858 "num_base_bdevs_operational": 2, 00:26:18.858 "base_bdevs_list": [ 00:26:18.858 { 00:26:18.858 "name": "pt1", 00:26:18.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:18.858 "is_configured": true, 00:26:18.858 "data_offset": 256, 00:26:18.858 "data_size": 7936 00:26:18.858 }, 00:26:18.858 { 00:26:18.858 "name": "pt2", 00:26:18.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:18.858 "is_configured": true, 00:26:18.858 "data_offset": 256, 00:26:18.858 "data_size": 7936 00:26:18.858 } 00:26:18.858 ] 00:26:18.858 } 00:26:18.858 } 00:26:18.858 }' 00:26:18.858 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:18.858 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:18.858 pt2' 00:26:18.858 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.858 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:18.858 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:19.116 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:19.116 "name": "pt1", 00:26:19.116 "aliases": [ 00:26:19.116 "00000000-0000-0000-0000-000000000001" 00:26:19.116 ], 00:26:19.116 "product_name": "passthru", 00:26:19.116 "block_size": 4096, 00:26:19.116 "num_blocks": 8192, 00:26:19.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:19.116 "md_size": 32, 00:26:19.116 "md_interleave": false, 00:26:19.116 "dif_type": 0, 00:26:19.116 "assigned_rate_limits": { 00:26:19.116 "rw_ios_per_sec": 0, 00:26:19.116 "rw_mbytes_per_sec": 0, 00:26:19.116 "r_mbytes_per_sec": 0, 00:26:19.116 "w_mbytes_per_sec": 0 00:26:19.116 }, 00:26:19.116 "claimed": true, 00:26:19.116 "claim_type": "exclusive_write", 00:26:19.116 "zoned": false, 00:26:19.116 "supported_io_types": { 00:26:19.116 "read": true, 00:26:19.116 "write": true, 00:26:19.116 "unmap": true, 00:26:19.116 "flush": true, 00:26:19.116 "reset": true, 00:26:19.116 "nvme_admin": false, 00:26:19.116 "nvme_io": false, 00:26:19.116 "nvme_io_md": false, 00:26:19.116 "write_zeroes": true, 00:26:19.116 "zcopy": true, 00:26:19.116 
"get_zone_info": false, 00:26:19.116 "zone_management": false, 00:26:19.116 "zone_append": false, 00:26:19.116 "compare": false, 00:26:19.116 "compare_and_write": false, 00:26:19.116 "abort": true, 00:26:19.116 "seek_hole": false, 00:26:19.116 "seek_data": false, 00:26:19.116 "copy": true, 00:26:19.116 "nvme_iov_md": false 00:26:19.116 }, 00:26:19.116 "memory_domains": [ 00:26:19.116 { 00:26:19.116 "dma_device_id": "system", 00:26:19.116 "dma_device_type": 1 00:26:19.116 }, 00:26:19.116 { 00:26:19.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.116 "dma_device_type": 2 00:26:19.117 } 00:26:19.117 ], 00:26:19.117 "driver_specific": { 00:26:19.117 "passthru": { 00:26:19.117 "name": "pt1", 00:26:19.117 "base_bdev_name": "malloc1" 00:26:19.117 } 00:26:19.117 } 00:26:19.117 }' 00:26:19.117 12:08:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.117 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:19.375 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:19.634 "name": "pt2", 00:26:19.634 "aliases": [ 00:26:19.634 "00000000-0000-0000-0000-000000000002" 00:26:19.634 ], 00:26:19.634 "product_name": "passthru", 00:26:19.634 "block_size": 4096, 00:26:19.634 "num_blocks": 8192, 00:26:19.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:19.634 "md_size": 32, 00:26:19.634 "md_interleave": false, 00:26:19.634 "dif_type": 0, 00:26:19.634 "assigned_rate_limits": { 00:26:19.634 "rw_ios_per_sec": 0, 00:26:19.634 "rw_mbytes_per_sec": 0, 00:26:19.634 "r_mbytes_per_sec": 0, 00:26:19.634 "w_mbytes_per_sec": 0 00:26:19.634 }, 00:26:19.634 "claimed": true, 00:26:19.634 "claim_type": "exclusive_write", 00:26:19.634 "zoned": false, 00:26:19.634 "supported_io_types": { 00:26:19.634 "read": true, 00:26:19.634 "write": true, 00:26:19.634 "unmap": true, 00:26:19.634 "flush": true, 00:26:19.634 "reset": true, 00:26:19.634 "nvme_admin": 
false, 00:26:19.634 "nvme_io": false, 00:26:19.634 "nvme_io_md": false, 00:26:19.634 "write_zeroes": true, 00:26:19.634 "zcopy": true, 00:26:19.634 "get_zone_info": false, 00:26:19.634 "zone_management": false, 00:26:19.634 "zone_append": false, 00:26:19.634 "compare": false, 00:26:19.634 "compare_and_write": false, 00:26:19.634 "abort": true, 00:26:19.634 "seek_hole": false, 00:26:19.634 "seek_data": false, 00:26:19.634 "copy": true, 00:26:19.634 "nvme_iov_md": false 00:26:19.634 }, 00:26:19.634 "memory_domains": [ 00:26:19.634 { 00:26:19.634 "dma_device_id": "system", 00:26:19.634 "dma_device_type": 1 00:26:19.634 }, 00:26:19.634 { 00:26:19.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.634 "dma_device_type": 2 00:26:19.634 } 00:26:19.634 ], 00:26:19.634 "driver_specific": { 00:26:19.634 "passthru": { 00:26:19.634 "name": "pt2", 00:26:19.634 "base_bdev_name": "malloc2" 00:26:19.634 } 00:26:19.634 } 00:26:19.634 }' 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:19.634 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.941 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.941 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:19.941 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.941 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.942 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:19.942 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:19.942 12:08:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:20.200 [2024-07-25 12:08:06.120441] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.200 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 3836ffca-2d32-4210-92e8-3a7869bea57b '!=' 3836ffca-2d32-4210-92e8-3a7869bea57b ']' 00:26:20.200 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:20.200 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:20.200 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:26:20.200 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:20.459 [2024-07-25 12:08:06.348834] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:20.459 12:08:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.459 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.729 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.729 "name": "raid_bdev1", 00:26:20.729 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:20.729 "strip_size_kb": 0, 00:26:20.729 "state": "online", 00:26:20.729 "raid_level": "raid1", 00:26:20.729 "superblock": true, 00:26:20.729 "num_base_bdevs": 2, 00:26:20.729 "num_base_bdevs_discovered": 1, 00:26:20.729 "num_base_bdevs_operational": 1, 00:26:20.729 "base_bdevs_list": [ 00:26:20.729 { 00:26:20.729 "name": null, 00:26:20.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.729 "is_configured": false, 00:26:20.729 "data_offset": 256, 00:26:20.729 "data_size": 7936 00:26:20.729 }, 00:26:20.729 { 00:26:20.729 "name": "pt2", 00:26:20.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:20.729 "is_configured": true, 00:26:20.729 "data_offset": 256, 00:26:20.729 "data_size": 7936 00:26:20.729 } 00:26:20.729 ] 00:26:20.729 }' 00:26:20.729 12:08:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.729 12:08:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:21.297 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:21.297 [2024-07-25 12:08:07.375506] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:21.297 [2024-07-25 12:08:07.375531] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.297 [2024-07-25 12:08:07.375577] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.297 [2024-07-25 12:08:07.375618] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.297 [2024-07-25 12:08:07.375629] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x2596850 name raid_bdev1, state offline 00:26:21.297 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.297 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:21.556 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:26:21.556 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:21.556 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:21.556 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:21.556 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:21.814 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:21.814 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:21.814 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:21.814 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:21.815 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:26:21.815 12:08:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:22.073 [2024-07-25 12:08:08.053256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:22.073 [2024-07-25 12:08:08.053296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.073 [2024-07-25 12:08:08.053311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2596f50 00:26:22.073 [2024-07-25 12:08:08.053323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.073 [2024-07-25 12:08:08.054666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.073 [2024-07-25 12:08:08.054693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:22.073 [2024-07-25 12:08:08.054737] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:22.073 [2024-07-25 12:08:08.054762] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:22.073 [2024-07-25 12:08:08.054832] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2594600 00:26:22.073 [2024-07-25 12:08:08.054842] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:22.073 [2024-07-25 12:08:08.054891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23ffbe0 00:26:22.073 [2024-07-25 12:08:08.054978] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2594600 00:26:22.073 [2024-07-25 12:08:08.054987] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2594600 00:26:22.073 [2024-07-25 12:08:08.055054] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:22.073 pt2 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.073 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.332 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.332 "name": "raid_bdev1", 00:26:22.332 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:22.332 "strip_size_kb": 0, 00:26:22.332 "state": "online", 00:26:22.332 "raid_level": "raid1", 00:26:22.332 "superblock": true, 00:26:22.332 "num_base_bdevs": 2, 00:26:22.332 "num_base_bdevs_discovered": 1, 00:26:22.332 "num_base_bdevs_operational": 1, 00:26:22.332 "base_bdevs_list": [ 00:26:22.332 { 00:26:22.332 "name": null, 00:26:22.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.332 "is_configured": false, 00:26:22.332 "data_offset": 256, 00:26:22.332 "data_size": 7936 00:26:22.332 }, 00:26:22.332 { 00:26:22.332 "name": "pt2", 00:26:22.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.332 "is_configured": true, 00:26:22.332 "data_offset": 256, 00:26:22.332 "data_size": 7936 00:26:22.332 } 00:26:22.332 ] 00:26:22.332 }' 00:26:22.332 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.332 12:08:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:22.899 12:08:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:23.158 [2024-07-25 12:08:09.075949] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:23.158 [2024-07-25 12:08:09.075971] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:23.158 [2024-07-25 12:08:09.076015] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.158 [2024-07-25 12:08:09.076053] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:23.158 
[2024-07-25 12:08:09.076064] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2594600 name raid_bdev1, state offline 00:26:23.158 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.158 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:26:23.417 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:26:23.417 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:26:23.417 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:26:23.417 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:23.417 [2024-07-25 12:08:09.533144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:23.417 [2024-07-25 12:08:09.533188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.417 [2024-07-25 12:08:09.533206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ff180 00:26:23.417 [2024-07-25 12:08:09.533217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.417 [2024-07-25 12:08:09.534545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.417 [2024-07-25 12:08:09.534570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:23.417 [2024-07-25 12:08:09.534611] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:23.417 [2024-07-25 12:08:09.534635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:23.417 [2024-07-25 12:08:09.534714] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:23.417 [2024-07-25 12:08:09.534726] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:23.417 [2024-07-25 12:08:09.534738] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x259a7a0 name raid_bdev1, state configuring 00:26:23.417 [2024-07-25 12:08:09.534757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:23.417 [2024-07-25 12:08:09.534801] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2597840 00:26:23.417 [2024-07-25 12:08:09.534810] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:23.417 [2024-07-25 12:08:09.534862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2598b90 00:26:23.417 [2024-07-25 12:08:09.534949] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2597840 00:26:23.417 [2024-07-25 12:08:09.534958] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2597840 00:26:23.417 [2024-07-25 12:08:09.535021] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.675 pt1 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.675 "name": "raid_bdev1", 00:26:23.675 "uuid": "3836ffca-2d32-4210-92e8-3a7869bea57b", 00:26:23.675 "strip_size_kb": 0, 00:26:23.675 "state": "online", 00:26:23.675 "raid_level": "raid1", 00:26:23.675 "superblock": true, 00:26:23.675 "num_base_bdevs": 2, 00:26:23.675 "num_base_bdevs_discovered": 1, 00:26:23.675 "num_base_bdevs_operational": 1, 00:26:23.675 "base_bdevs_list": [ 00:26:23.675 { 00:26:23.675 "name": null, 00:26:23.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.675 "is_configured": false, 00:26:23.675 "data_offset": 256, 00:26:23.675 "data_size": 7936 00:26:23.675 }, 00:26:23.675 { 00:26:23.675 "name": "pt2", 00:26:23.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.675 "is_configured": true, 00:26:23.675 "data_offset": 256, 00:26:23.675 "data_size": 7936 00:26:23.675 } 00:26:23.675 ] 00:26:23.675 }' 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.675 12:08:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:24.610 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:24.610 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:24.610 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:24.610 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:24.610 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:24.869 [2024-07-25 12:08:10.804729] bdev_raid.c:1119:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 3836ffca-2d32-4210-92e8-3a7869bea57b '!=' 3836ffca-2d32-4210-92e8-3a7869bea57b ']' 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 75922 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 75922 ']' 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 75922 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75922 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.869 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75922' 00:26:24.870 killing process with pid 75922 00:26:24.870 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 75922 00:26:24.870 [2024-07-25 12:08:10.868296] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:24.870 [2024-07-25 12:08:10.868342] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:24.870 [2024-07-25 12:08:10.868383] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.870 [2024-07-25 12:08:10.868393] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2597840 name raid_bdev1, state offline 00:26:24.870 12:08:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 75922 00:26:24.870 [2024-07-25 12:08:10.889362] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:25.129 12:08:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:26:25.129 00:26:25.129 real 0m14.807s 00:26:25.129 user 0m26.830s 00:26:25.129 sys 0m2.743s 00:26:25.129 12:08:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.129 12:08:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:25.129 ************************************ 00:26:25.129 END TEST raid_superblock_test_md_separate 00:26:25.129 ************************************ 00:26:25.129 12:08:11 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:26:25.129 12:08:11 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:26:25.129 12:08:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:25.129 12:08:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.129 12:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:25.129 ************************************ 00:26:25.129 START TEST raid_rebuild_test_sb_md_separate 00:26:25.129 ************************************ 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # 
raid_rebuild_test raid1 2 true false true 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=78654 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 78654 /var/tmp/spdk-raid.sock 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 78654 ']' 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:25.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.129 12:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:25.129 [2024-07-25 12:08:11.230614] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:26:25.129 [2024-07-25 12:08:11.230676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78654 ] 00:26:25.129 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:25.129 Zero copy mechanism will not be used. 00:26:25.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:25.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.389 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:25.389 [2024-07-25 12:08:11.350130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.389 [2024-07-25 12:08:11.436878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.389 [2024-07-25 12:08:11.503238] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:25.389 [2024-07-25 12:08:11.503273] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:26.325 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.325 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:26.325 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:26.325 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:26:26.325 BaseBdev1_malloc 00:26:26.325 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:26.584 [2024-07-25 12:08:12.557930] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:26.584 [2024-07-25 12:08:12.557973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.584 [2024-07-25 12:08:12.557995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164ffc0 00:26:26.584 [2024-07-25 12:08:12.558007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.584 [2024-07-25 12:08:12.559446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.584 [2024-07-25 12:08:12.559474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:26.584 BaseBdev1 00:26:26.584 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:26.584 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:26:26.843 BaseBdev2_malloc 00:26:26.843 12:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:27.101 [2024-07-25 12:08:13.008253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:27.101 [2024-07-25 12:08:13.008295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.101 [2024-07-25 12:08:13.008313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17631f0 00:26:27.101 [2024-07-25 12:08:13.008325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.101 [2024-07-25 12:08:13.009570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.101 [2024-07-25 12:08:13.009595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:27.101 BaseBdev2 00:26:27.101 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:26:27.360 spare_malloc 00:26:27.360 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:27.360 spare_delay 00:26:27.618 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:27.618 [2024-07-25 12:08:13.691079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:27.618 [2024-07-25 12:08:13.691118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.618 [2024-07-25 12:08:13.691143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1766230 00:26:27.618 [2024-07-25 12:08:13.691161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.618 [2024-07-25 12:08:13.692412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.618 [2024-07-25 12:08:13.692439] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:27.618 spare 00:26:27.618 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:27.876 [2024-07-25 12:08:13.903672] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:27.876 [2024-07-25 12:08:13.904783] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:27.876 [2024-07-25 12:08:13.904935] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1766fb0 00:26:27.876 [2024-07-25 12:08:13.904947] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:27.876 [2024-07-25 12:08:13.905008] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ce210 00:26:27.876 [2024-07-25 12:08:13.905110] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1766fb0 00:26:27.876 [2024-07-25 12:08:13.905120] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1766fb0 00:26:27.876 [2024-07-25 12:08:13.905188] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:27.876 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.877 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.877 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.877 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.877 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.877 12:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.136 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.136 "name": "raid_bdev1", 00:26:28.136 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:28.136 "strip_size_kb": 0, 00:26:28.136 "state": "online", 00:26:28.136 "raid_level": "raid1", 00:26:28.136 "superblock": true, 00:26:28.136 "num_base_bdevs": 2, 00:26:28.136 "num_base_bdevs_discovered": 2, 00:26:28.136 "num_base_bdevs_operational": 2, 00:26:28.136 "base_bdevs_list": [ 00:26:28.136 { 00:26:28.136 "name": "BaseBdev1", 00:26:28.136 "uuid": "77205868-b723-56f6-a010-1b3bc1f09fc8", 00:26:28.136 
"is_configured": true, 00:26:28.136 "data_offset": 256, 00:26:28.136 "data_size": 7936 00:26:28.136 }, 00:26:28.136 { 00:26:28.136 "name": "BaseBdev2", 00:26:28.136 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:28.136 "is_configured": true, 00:26:28.136 "data_offset": 256, 00:26:28.136 "data_size": 7936 00:26:28.136 } 00:26:28.136 ] 00:26:28.136 }' 00:26:28.136 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.136 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:28.702 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:28.702 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:28.960 [2024-07-25 12:08:14.942621] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:28.960 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:26:28.960 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.960 12:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:29.218 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:29.476 [2024-07-25 12:08:15.383597] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x176a780 00:26:29.476 /dev/nbd0 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:29.476 12:08:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:29.476 1+0 records in 00:26:29.476 1+0 records out 00:26:29.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220904 s, 18.5 MB/s 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:29.476 12:08:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:30.042 7936+0 records in 00:26:30.042 7936+0 records out 00:26:30.042 32505856 bytes (33 MB, 31 MiB) copied, 0.668454 s, 48.6 MB/s 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
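The trace above shows how the rebuild test seeds raid_bdev1 with data before degrading it: the raid bdev is exposed over NBD, 7936 blocks of 4096 bytes (32505856 bytes, matching the dd output above) are written through /dev/nbd0, and the NBD device is then torn down. A minimal sketch of that sequence, using only the RPC calls visible in this log (socket path, bdev name, and block counts taken from the trace; outside the test harness this assumes an SPDK target is already running with raid_bdev1 configured and NBD support enabled):
    # expose the raid bdev as a local block device over NBD
    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
    # seed it with random data: 7936 blocks x 4096 bytes, direct I/O as in the trace
    dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
    # detach the NBD device again before the rebuild steps that follow
    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0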
00:26:30.042 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:30.302 [2024-07-25 12:08:16.359205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:30.302 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:30.560 [2024-07-25 12:08:16.627950] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:30.560 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:30.560 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.560 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.561 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.819 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.819 "name": "raid_bdev1", 00:26:30.819 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:30.819 "strip_size_kb": 0, 00:26:30.819 "state": "online", 00:26:30.819 "raid_level": "raid1", 00:26:30.819 "superblock": true, 00:26:30.819 "num_base_bdevs": 2, 00:26:30.819 "num_base_bdevs_discovered": 1, 00:26:30.819 
"num_base_bdevs_operational": 1, 00:26:30.819 "base_bdevs_list": [ 00:26:30.819 { 00:26:30.819 "name": null, 00:26:30.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.819 "is_configured": false, 00:26:30.819 "data_offset": 256, 00:26:30.819 "data_size": 7936 00:26:30.819 }, 00:26:30.819 { 00:26:30.819 "name": "BaseBdev2", 00:26:30.819 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:30.819 "is_configured": true, 00:26:30.819 "data_offset": 256, 00:26:30.819 "data_size": 7936 00:26:30.819 } 00:26:30.819 ] 00:26:30.819 }' 00:26:30.819 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.819 12:08:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:31.386 12:08:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:31.644 [2024-07-25 12:08:17.666771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:31.644 [2024-07-25 12:08:17.668971] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1764ef0 00:26:31.644 [2024-07-25 12:08:17.670999] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:31.644 12:08:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.579 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.837 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.837 "name": "raid_bdev1", 00:26:32.837 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:32.837 "strip_size_kb": 0, 00:26:32.837 "state": "online", 00:26:32.837 "raid_level": "raid1", 00:26:32.837 "superblock": true, 00:26:32.837 "num_base_bdevs": 2, 00:26:32.837 "num_base_bdevs_discovered": 2, 00:26:32.837 "num_base_bdevs_operational": 2, 00:26:32.837 "process": { 00:26:32.837 "type": "rebuild", 00:26:32.837 "target": "spare", 00:26:32.837 "progress": { 00:26:32.837 "blocks": 3072, 00:26:32.837 "percent": 38 00:26:32.837 } 00:26:32.837 }, 00:26:32.837 "base_bdevs_list": [ 00:26:32.837 { 00:26:32.837 "name": "spare", 00:26:32.837 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:32.837 "is_configured": true, 00:26:32.837 "data_offset": 256, 00:26:32.837 "data_size": 7936 00:26:32.837 }, 00:26:32.837 { 00:26:32.837 "name": "BaseBdev2", 00:26:32.837 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:32.837 "is_configured": true, 00:26:32.837 "data_offset": 256, 
00:26:32.837 "data_size": 7936 00:26:32.837 } 00:26:32.837 ] 00:26:32.837 }' 00:26:32.837 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:33.095 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:33.095 12:08:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:33.095 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:33.095 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:33.095 [2024-07-25 12:08:19.212076] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:33.355 [2024-07-25 12:08:19.282882] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:33.355 [2024-07-25 12:08:19.282933] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.355 [2024-07-25 12:08:19.282947] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:33.355 [2024-07-25 12:08:19.282955] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.355 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.676 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.676 "name": "raid_bdev1", 00:26:33.676 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:33.676 "strip_size_kb": 0, 00:26:33.676 "state": "online", 00:26:33.676 "raid_level": "raid1", 00:26:33.676 "superblock": true, 00:26:33.676 "num_base_bdevs": 2, 00:26:33.676 "num_base_bdevs_discovered": 1, 00:26:33.676 "num_base_bdevs_operational": 1, 00:26:33.676 "base_bdevs_list": [ 00:26:33.676 { 00:26:33.677 "name": null, 00:26:33.677 
"uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.677 "is_configured": false, 00:26:33.677 "data_offset": 256, 00:26:33.677 "data_size": 7936 00:26:33.677 }, 00:26:33.677 { 00:26:33.677 "name": "BaseBdev2", 00:26:33.677 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:33.677 "is_configured": true, 00:26:33.677 "data_offset": 256, 00:26:33.677 "data_size": 7936 00:26:33.677 } 00:26:33.677 ] 00:26:33.677 }' 00:26:33.677 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.677 12:08:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.244 "name": "raid_bdev1", 00:26:34.244 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:34.244 "strip_size_kb": 0, 00:26:34.244 "state": "online", 00:26:34.244 "raid_level": "raid1", 00:26:34.244 "superblock": true, 00:26:34.244 "num_base_bdevs": 2, 00:26:34.244 "num_base_bdevs_discovered": 1, 00:26:34.244 "num_base_bdevs_operational": 1, 00:26:34.244 "base_bdevs_list": [ 00:26:34.244 { 00:26:34.244 "name": null, 00:26:34.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.244 "is_configured": false, 00:26:34.244 "data_offset": 256, 00:26:34.244 "data_size": 7936 00:26:34.244 }, 00:26:34.244 { 00:26:34.244 "name": "BaseBdev2", 00:26:34.244 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:34.244 "is_configured": true, 00:26:34.244 "data_offset": 256, 00:26:34.244 "data_size": 7936 00:26:34.244 } 00:26:34.244 ] 00:26:34.244 }' 00:26:34.244 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.503 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:34.503 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.503 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:34.503 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:34.761 [2024-07-25 12:08:20.645373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:34.762 [2024-07-25 12:08:20.647594] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1764ef0 00:26:34.762 
[2024-07-25 12:08:20.648945] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:34.762 12:08:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.707 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.967 "name": "raid_bdev1", 00:26:35.967 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:35.967 "strip_size_kb": 0, 00:26:35.967 "state": "online", 00:26:35.967 "raid_level": "raid1", 00:26:35.967 "superblock": true, 00:26:35.967 "num_base_bdevs": 2, 00:26:35.967 "num_base_bdevs_discovered": 2, 00:26:35.967 "num_base_bdevs_operational": 2, 00:26:35.967 "process": { 00:26:35.967 "type": "rebuild", 00:26:35.967 "target": "spare", 00:26:35.967 "progress": { 00:26:35.967 "blocks": 3072, 00:26:35.967 "percent": 38 00:26:35.967 } 00:26:35.967 }, 00:26:35.967 "base_bdevs_list": [ 00:26:35.967 { 00:26:35.967 "name": "spare", 00:26:35.967 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:35.967 "is_configured": true, 00:26:35.967 "data_offset": 256, 00:26:35.967 "data_size": 7936 00:26:35.967 }, 00:26:35.967 { 00:26:35.967 "name": "BaseBdev2", 00:26:35.967 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:35.967 "is_configured": true, 00:26:35.967 "data_offset": 256, 00:26:35.967 "data_size": 7936 00:26:35.967 } 00:26:35.967 ] 00:26:35.967 }' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:35.967 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1006 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.967 12:08:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.226 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.226 "name": "raid_bdev1", 00:26:36.226 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:36.226 "strip_size_kb": 0, 00:26:36.226 "state": "online", 00:26:36.226 "raid_level": "raid1", 00:26:36.226 "superblock": true, 00:26:36.226 "num_base_bdevs": 2, 00:26:36.226 "num_base_bdevs_discovered": 2, 00:26:36.226 "num_base_bdevs_operational": 2, 00:26:36.226 "process": { 00:26:36.226 "type": "rebuild", 00:26:36.226 "target": "spare", 00:26:36.226 "progress": { 00:26:36.226 "blocks": 3840, 00:26:36.226 "percent": 48 00:26:36.226 } 00:26:36.226 }, 00:26:36.226 "base_bdevs_list": [ 00:26:36.226 { 00:26:36.226 "name": "spare", 00:26:36.226 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:36.226 "is_configured": true, 00:26:36.226 "data_offset": 256, 00:26:36.226 "data_size": 7936 00:26:36.226 }, 00:26:36.226 { 00:26:36.226 "name": "BaseBdev2", 00:26:36.226 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:36.226 "is_configured": true, 00:26:36.226 "data_offset": 256, 00:26:36.226 "data_size": 7936 00:26:36.226 } 00:26:36.226 ] 00:26:36.226 }' 00:26:36.226 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:36.226 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:36.227 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:36.227 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:36.227 12:08:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
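The bdev_raid.sh@705-710 steps traced above are the test's rebuild-wait loop: it re-reads raid_bdev1 over RPC once a second while the reported rebuild progress advances (38% then 48% here), bounded by the timeout set at @705. A minimal standalone sketch of that polling pattern, reusing the rpc.py path, socket and jq filters that appear in this trace:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=1006   # seconds; matches the value set at bdev_raid.sh@705 in this run
    while (( SECONDS < timeout )); do
        # Pull the current view of raid_bdev1, exactly as verify_raid_bdev_process does.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # Stop once no rebuild process is reported anymore.
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        echo "rebuild at $(jq -r '.process.progress.percent // 0' <<< "$info")%"
        sleep 1
    done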
00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.603 "name": "raid_bdev1", 00:26:37.603 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:37.603 "strip_size_kb": 0, 00:26:37.603 "state": "online", 00:26:37.603 "raid_level": "raid1", 00:26:37.603 "superblock": true, 00:26:37.603 "num_base_bdevs": 2, 00:26:37.603 "num_base_bdevs_discovered": 2, 00:26:37.603 "num_base_bdevs_operational": 2, 00:26:37.603 "process": { 00:26:37.603 "type": "rebuild", 00:26:37.603 "target": "spare", 00:26:37.603 "progress": { 00:26:37.603 "blocks": 7168, 00:26:37.603 "percent": 90 00:26:37.603 } 00:26:37.603 }, 00:26:37.603 "base_bdevs_list": [ 00:26:37.603 { 00:26:37.603 "name": "spare", 00:26:37.603 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:37.603 "is_configured": true, 00:26:37.603 "data_offset": 256, 00:26:37.603 "data_size": 7936 00:26:37.603 }, 00:26:37.603 { 00:26:37.603 "name": "BaseBdev2", 00:26:37.603 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:37.603 "is_configured": true, 00:26:37.603 "data_offset": 256, 00:26:37.603 "data_size": 7936 00:26:37.603 } 00:26:37.603 ] 00:26:37.603 }' 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.603 12:08:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:37.861 [2024-07-25 12:08:23.771744] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:37.861 [2024-07-25 12:08:23.771801] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:37.861 [2024-07-25 12:08:23.771876] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:38.797 12:08:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.797 "name": "raid_bdev1", 00:26:38.797 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:38.797 "strip_size_kb": 0, 00:26:38.797 "state": "online", 00:26:38.797 "raid_level": "raid1", 00:26:38.797 "superblock": true, 00:26:38.797 "num_base_bdevs": 2, 00:26:38.797 "num_base_bdevs_discovered": 2, 00:26:38.797 "num_base_bdevs_operational": 2, 00:26:38.797 "base_bdevs_list": [ 00:26:38.797 { 00:26:38.797 "name": "spare", 00:26:38.797 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:38.797 "is_configured": true, 00:26:38.797 "data_offset": 256, 00:26:38.797 "data_size": 7936 00:26:38.797 }, 00:26:38.797 { 00:26:38.797 "name": "BaseBdev2", 00:26:38.797 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:38.797 "is_configured": true, 00:26:38.797 "data_offset": 256, 00:26:38.797 "data_size": 7936 00:26:38.797 } 00:26:38.797 ] 00:26:38.797 }' 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:38.797 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:39.056 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:39.056 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:26:39.056 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.057 12:08:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.057 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:39.057 "name": "raid_bdev1", 00:26:39.057 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:39.057 "strip_size_kb": 0, 00:26:39.057 "state": "online", 00:26:39.057 "raid_level": "raid1", 00:26:39.057 "superblock": true, 00:26:39.057 "num_base_bdevs": 2, 00:26:39.057 "num_base_bdevs_discovered": 2, 00:26:39.057 "num_base_bdevs_operational": 2, 00:26:39.057 "base_bdevs_list": [ 00:26:39.057 { 00:26:39.057 "name": "spare", 00:26:39.057 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:39.057 "is_configured": true, 00:26:39.057 "data_offset": 256, 
00:26:39.057 "data_size": 7936 00:26:39.057 }, 00:26:39.057 { 00:26:39.057 "name": "BaseBdev2", 00:26:39.057 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:39.057 "is_configured": true, 00:26:39.057 "data_offset": 256, 00:26:39.057 "data_size": 7936 00:26:39.057 } 00:26:39.057 ] 00:26:39.057 }' 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.316 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.575 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.575 "name": "raid_bdev1", 00:26:39.575 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:39.575 "strip_size_kb": 0, 00:26:39.575 "state": "online", 00:26:39.575 "raid_level": "raid1", 00:26:39.575 "superblock": true, 00:26:39.575 "num_base_bdevs": 2, 00:26:39.575 "num_base_bdevs_discovered": 2, 00:26:39.575 "num_base_bdevs_operational": 2, 00:26:39.575 "base_bdevs_list": [ 00:26:39.575 { 00:26:39.575 "name": "spare", 00:26:39.575 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:39.575 "is_configured": true, 00:26:39.575 "data_offset": 256, 00:26:39.575 "data_size": 7936 00:26:39.575 }, 00:26:39.575 { 00:26:39.575 "name": "BaseBdev2", 00:26:39.575 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:39.575 "is_configured": true, 00:26:39.575 "data_offset": 256, 00:26:39.575 "data_size": 7936 00:26:39.575 } 00:26:39.575 ] 00:26:39.575 }' 00:26:39.575 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.575 12:08:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- 
# set +x 00:26:40.142 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:40.400 [2024-07-25 12:08:26.278553] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:40.400 [2024-07-25 12:08:26.278577] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:40.400 [2024-07-25 12:08:26.278628] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:40.400 [2024-07-25 12:08:26.278681] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:40.400 [2024-07-25 12:08:26.278692] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1766fb0 name raid_bdev1, state offline 00:26:40.400 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.400 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:40.659 /dev/nbd0 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:40.659 1+0 records in 00:26:40.659 1+0 records out 00:26:40.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000122934 s, 33.3 MB/s 00:26:40.659 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:40.918 12:08:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:40.918 /dev/nbd1 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:40.918 1+0 records in 00:26:40.918 1+0 records out 00:26:40.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336999 s, 12.2 MB/s 
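The waitfornbd helper traced above (autotest_common.sh@868-889) is the readiness probe run after each nbd_start_disk: poll /proc/partitions up to 20 times for the device name, then confirm the device serves I/O with a single 4 KiB O_DIRECT read. A condensed sketch of that probe; the retry delay is an assumption, since the grep in the trace above succeeds on its first attempt:

    # waitfornbd-style readiness probe (sketch); usage: wait_for_nbd nbd0
    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between attempts
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # A single 4 KiB O_DIRECT read proves the block device actually serves I/O,
        # mirroring the dd ... iflag=direct step in the trace above.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }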
00:26:40.918 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.177 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.436 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:41.695 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:41.954 12:08:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:41.954 [2024-07-25 12:08:28.046578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:41.954 [2024-07-25 12:08:28.046620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.954 [2024-07-25 12:08:28.046642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ce2e0 00:26:41.954 [2024-07-25 12:08:28.046653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.954 [2024-07-25 12:08:28.048026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.954 [2024-07-25 12:08:28.048052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:41.954 [2024-07-25 12:08:28.048107] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:41.954 [2024-07-25 12:08:28.048132] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:41.954 [2024-07-25 12:08:28.048230] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:41.954 spare 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.954 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.213 [2024-07-25 12:08:28.148533] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x15ce980 00:26:42.213 [2024-07-25 12:08:28.148545] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:42.213 [2024-07-25 12:08:28.148604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1764ef0 00:26:42.213 [2024-07-25 12:08:28.148712] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15ce980 00:26:42.213 [2024-07-25 12:08:28.148721] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15ce980 00:26:42.213 [2024-07-25 12:08:28.148789] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.213 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.213 "name": "raid_bdev1", 00:26:42.213 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:42.213 "strip_size_kb": 0, 00:26:42.213 "state": "online", 00:26:42.213 "raid_level": "raid1", 00:26:42.213 "superblock": true, 00:26:42.213 "num_base_bdevs": 2, 00:26:42.213 "num_base_bdevs_discovered": 2, 00:26:42.213 "num_base_bdevs_operational": 2, 00:26:42.214 "base_bdevs_list": [ 00:26:42.214 { 00:26:42.214 "name": "spare", 00:26:42.214 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:42.214 "is_configured": true, 00:26:42.214 "data_offset": 256, 00:26:42.214 "data_size": 7936 00:26:42.214 }, 00:26:42.214 { 00:26:42.214 "name": "BaseBdev2", 00:26:42.214 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:42.214 "is_configured": true, 00:26:42.214 "data_offset": 256, 00:26:42.214 "data_size": 7936 00:26:42.214 } 00:26:42.214 ] 00:26:42.214 }' 00:26:42.214 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.214 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.780 12:08:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.038 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.038 "name": "raid_bdev1", 00:26:43.038 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:43.038 "strip_size_kb": 0, 00:26:43.038 "state": "online", 00:26:43.038 "raid_level": "raid1", 00:26:43.038 "superblock": true, 00:26:43.038 
"num_base_bdevs": 2, 00:26:43.038 "num_base_bdevs_discovered": 2, 00:26:43.038 "num_base_bdevs_operational": 2, 00:26:43.039 "base_bdevs_list": [ 00:26:43.039 { 00:26:43.039 "name": "spare", 00:26:43.039 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:43.039 "is_configured": true, 00:26:43.039 "data_offset": 256, 00:26:43.039 "data_size": 7936 00:26:43.039 }, 00:26:43.039 { 00:26:43.039 "name": "BaseBdev2", 00:26:43.039 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:43.039 "is_configured": true, 00:26:43.039 "data_offset": 256, 00:26:43.039 "data_size": 7936 00:26:43.039 } 00:26:43.039 ] 00:26:43.039 }' 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.039 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:43.297 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.297 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:43.556 [2024-07-25 12:08:29.530661] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.556 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.815 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.815 "name": "raid_bdev1", 00:26:43.815 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:43.815 "strip_size_kb": 0, 00:26:43.815 "state": "online", 00:26:43.815 "raid_level": "raid1", 00:26:43.815 "superblock": true, 00:26:43.815 "num_base_bdevs": 2, 00:26:43.815 "num_base_bdevs_discovered": 1, 00:26:43.815 "num_base_bdevs_operational": 1, 00:26:43.815 "base_bdevs_list": [ 00:26:43.815 { 00:26:43.815 "name": null, 00:26:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.815 "is_configured": false, 00:26:43.815 "data_offset": 256, 00:26:43.815 "data_size": 7936 00:26:43.815 }, 00:26:43.815 { 00:26:43.815 "name": "BaseBdev2", 00:26:43.815 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:43.815 "is_configured": true, 00:26:43.815 "data_offset": 256, 00:26:43.815 "data_size": 7936 00:26:43.815 } 00:26:43.815 ] 00:26:43.815 }' 00:26:43.815 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.815 12:08:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:44.382 12:08:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:44.640 [2024-07-25 12:08:30.541326] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:44.640 [2024-07-25 12:08:30.541462] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:44.640 [2024-07-25 12:08:30.541478] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
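The @752-754 sequence above is the degrade-and-recover cycle: detach the spare, check that raid_bdev1 stays online with a single discovered base bdev, then hand the spare back so the superblock examine path re-adds it and starts a rebuild. The same cycle reduced to the bare RPC calls seen in this trace:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Degrade the array: detach the spare base bdev.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare

    # The raid bdev should stay online with one discovered base bdev.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'

    # Hand the spare back; the examine path re-adds it and kicks off the rebuild.
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare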
00:26:44.640 [2024-07-25 12:08:30.541503] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:44.640 [2024-07-25 12:08:30.543611] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1764ef0 00:26:44.640 [2024-07-25 12:08:30.545759] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:44.640 12:08:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.575 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.834 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:45.834 "name": "raid_bdev1", 00:26:45.834 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:45.834 "strip_size_kb": 0, 00:26:45.835 "state": "online", 00:26:45.835 "raid_level": "raid1", 00:26:45.835 "superblock": true, 00:26:45.835 "num_base_bdevs": 2, 00:26:45.835 "num_base_bdevs_discovered": 2, 00:26:45.835 "num_base_bdevs_operational": 2, 00:26:45.835 "process": { 00:26:45.835 "type": "rebuild", 00:26:45.835 "target": "spare", 00:26:45.835 "progress": { 00:26:45.835 "blocks": 3072, 00:26:45.835 "percent": 38 00:26:45.835 } 00:26:45.835 }, 00:26:45.835 "base_bdevs_list": [ 00:26:45.835 { 00:26:45.835 "name": "spare", 00:26:45.835 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:45.835 "is_configured": true, 00:26:45.835 "data_offset": 256, 00:26:45.835 "data_size": 7936 00:26:45.835 }, 00:26:45.835 { 00:26:45.835 "name": "BaseBdev2", 00:26:45.835 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:45.835 "is_configured": true, 00:26:45.835 "data_offset": 256, 00:26:45.835 "data_size": 7936 00:26:45.835 } 00:26:45.835 ] 00:26:45.835 }' 00:26:45.835 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:45.835 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:45.835 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:45.835 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:45.835 12:08:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:46.093 [2024-07-25 12:08:32.098475] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.093 [2024-07-25 12:08:32.157636] bdev_raid.c:2541:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:46.093 [2024-07-25 12:08:32.157675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.093 [2024-07-25 12:08:32.157689] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.093 [2024-07-25 12:08:32.157696] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.093 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.351 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.351 "name": "raid_bdev1", 00:26:46.351 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:46.351 "strip_size_kb": 0, 00:26:46.351 "state": "online", 00:26:46.351 "raid_level": "raid1", 00:26:46.351 "superblock": true, 00:26:46.351 "num_base_bdevs": 2, 00:26:46.351 "num_base_bdevs_discovered": 1, 00:26:46.351 "num_base_bdevs_operational": 1, 00:26:46.351 "base_bdevs_list": [ 00:26:46.351 { 00:26:46.351 "name": null, 00:26:46.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.351 "is_configured": false, 00:26:46.351 "data_offset": 256, 00:26:46.351 "data_size": 7936 00:26:46.351 }, 00:26:46.351 { 00:26:46.351 "name": "BaseBdev2", 00:26:46.351 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:46.351 "is_configured": true, 00:26:46.351 "data_offset": 256, 00:26:46.351 "data_size": 7936 00:26:46.351 } 00:26:46.351 ] 00:26:46.351 }' 00:26:46.351 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.351 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:46.918 12:08:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:47.176 [2024-07-25 12:08:33.187274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:26:47.176 [2024-07-25 12:08:33.187315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.176 [2024-07-25 12:08:33.187334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15cec00 00:26:47.176 [2024-07-25 12:08:33.187345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.176 [2024-07-25 12:08:33.187539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.177 [2024-07-25 12:08:33.187556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:47.177 [2024-07-25 12:08:33.187606] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:47.177 [2024-07-25 12:08:33.187618] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:47.177 [2024-07-25 12:08:33.187628] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:47.177 [2024-07-25 12:08:33.187645] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:47.177 [2024-07-25 12:08:33.189722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x176a340 00:26:47.177 [2024-07-25 12:08:33.191090] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:47.177 spare 00:26:47.177 12:08:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.116 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.411 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.411 "name": "raid_bdev1", 00:26:48.411 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:48.411 "strip_size_kb": 0, 00:26:48.411 "state": "online", 00:26:48.411 "raid_level": "raid1", 00:26:48.411 "superblock": true, 00:26:48.411 "num_base_bdevs": 2, 00:26:48.411 "num_base_bdevs_discovered": 2, 00:26:48.411 "num_base_bdevs_operational": 2, 00:26:48.411 "process": { 00:26:48.411 "type": "rebuild", 00:26:48.411 "target": "spare", 00:26:48.411 "progress": { 00:26:48.411 "blocks": 3072, 00:26:48.411 "percent": 38 00:26:48.411 } 00:26:48.411 }, 00:26:48.411 "base_bdevs_list": [ 00:26:48.411 { 00:26:48.411 "name": "spare", 00:26:48.411 "uuid": "62ac494c-660c-56e4-a186-944cefff7fe7", 00:26:48.411 "is_configured": true, 00:26:48.411 "data_offset": 256, 00:26:48.411 "data_size": 7936 00:26:48.411 }, 00:26:48.411 { 00:26:48.411 "name": "BaseBdev2", 00:26:48.412 "uuid": 
"62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:48.412 "is_configured": true, 00:26:48.412 "data_offset": 256, 00:26:48.412 "data_size": 7936 00:26:48.412 } 00:26:48.412 ] 00:26:48.412 }' 00:26:48.412 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.412 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.670 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.670 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.670 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:48.670 [2024-07-25 12:08:34.744647] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:48.928 [2024-07-25 12:08:34.802913] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:48.928 [2024-07-25 12:08:34.802962] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.928 [2024-07-25 12:08:34.802977] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:48.928 [2024-07-25 12:08:34.802984] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.928 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.929 12:08:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.187 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.187 "name": "raid_bdev1", 00:26:49.187 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:49.187 "strip_size_kb": 0, 00:26:49.187 "state": "online", 00:26:49.187 "raid_level": "raid1", 00:26:49.187 "superblock": true, 00:26:49.187 "num_base_bdevs": 2, 00:26:49.187 "num_base_bdevs_discovered": 1, 00:26:49.187 
"num_base_bdevs_operational": 1, 00:26:49.187 "base_bdevs_list": [ 00:26:49.187 { 00:26:49.187 "name": null, 00:26:49.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.187 "is_configured": false, 00:26:49.187 "data_offset": 256, 00:26:49.187 "data_size": 7936 00:26:49.187 }, 00:26:49.187 { 00:26:49.187 "name": "BaseBdev2", 00:26:49.187 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:49.187 "is_configured": true, 00:26:49.187 "data_offset": 256, 00:26:49.187 "data_size": 7936 00:26:49.187 } 00:26:49.187 ] 00:26:49.187 }' 00:26:49.187 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.187 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.754 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.754 "name": "raid_bdev1", 00:26:49.754 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:49.754 "strip_size_kb": 0, 00:26:49.754 "state": "online", 00:26:49.754 "raid_level": "raid1", 00:26:49.754 "superblock": true, 00:26:49.754 "num_base_bdevs": 2, 00:26:49.754 "num_base_bdevs_discovered": 1, 00:26:49.754 "num_base_bdevs_operational": 1, 00:26:49.754 "base_bdevs_list": [ 00:26:49.754 { 00:26:49.754 "name": null, 00:26:49.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.754 "is_configured": false, 00:26:49.754 "data_offset": 256, 00:26:49.754 "data_size": 7936 00:26:49.754 }, 00:26:49.754 { 00:26:49.755 "name": "BaseBdev2", 00:26:49.755 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:49.755 "is_configured": true, 00:26:49.755 "data_offset": 256, 00:26:49.755 "data_size": 7936 00:26:49.755 } 00:26:49.755 ] 00:26:49.755 }' 00:26:49.755 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.013 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.013 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.013 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.013 12:08:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:50.271 12:08:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:50.271 [2024-07-25 12:08:36.349495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:50.272 [2024-07-25 12:08:36.349536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.272 [2024-07-25 12:08:36.349558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15cf930 00:26:50.272 [2024-07-25 12:08:36.349570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.272 [2024-07-25 12:08:36.349738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.272 [2024-07-25 12:08:36.349754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:50.272 [2024-07-25 12:08:36.349796] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:50.272 [2024-07-25 12:08:36.349808] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:50.272 [2024-07-25 12:08:36.349817] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:50.272 BaseBdev1 00:26:50.272 12:08:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.647 "name": "raid_bdev1", 00:26:51.647 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:51.647 "strip_size_kb": 0, 00:26:51.647 "state": "online", 00:26:51.647 "raid_level": "raid1", 00:26:51.647 "superblock": true, 00:26:51.647 "num_base_bdevs": 2, 00:26:51.647 "num_base_bdevs_discovered": 1, 00:26:51.647 "num_base_bdevs_operational": 1, 00:26:51.647 "base_bdevs_list": [ 00:26:51.647 { 
00:26:51.647 "name": null, 00:26:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.647 "is_configured": false, 00:26:51.647 "data_offset": 256, 00:26:51.647 "data_size": 7936 00:26:51.647 }, 00:26:51.647 { 00:26:51.647 "name": "BaseBdev2", 00:26:51.647 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:51.647 "is_configured": true, 00:26:51.647 "data_offset": 256, 00:26:51.647 "data_size": 7936 00:26:51.647 } 00:26:51.647 ] 00:26:51.647 }' 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.647 12:08:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.214 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.473 "name": "raid_bdev1", 00:26:52.473 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:52.473 "strip_size_kb": 0, 00:26:52.473 "state": "online", 00:26:52.473 "raid_level": "raid1", 00:26:52.473 "superblock": true, 00:26:52.473 "num_base_bdevs": 2, 00:26:52.473 "num_base_bdevs_discovered": 1, 00:26:52.473 "num_base_bdevs_operational": 1, 00:26:52.473 "base_bdevs_list": [ 00:26:52.473 { 00:26:52.473 "name": null, 00:26:52.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.473 "is_configured": false, 00:26:52.473 "data_offset": 256, 00:26:52.473 "data_size": 7936 00:26:52.473 }, 00:26:52.473 { 00:26:52.473 "name": "BaseBdev2", 00:26:52.473 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:52.473 "is_configured": true, 00:26:52.473 "data_offset": 256, 00:26:52.473 "data_size": 7936 00:26:52.473 } 00:26:52.473 ] 00:26:52.473 }' 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:52.473 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.732 [2024-07-25 12:08:38.703737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:52.732 [2024-07-25 12:08:38.703847] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:52.732 [2024-07-25 12:08:38.703862] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:52.732 request: 00:26:52.732 { 00:26:52.732 "base_bdev": "BaseBdev1", 00:26:52.732 "raid_bdev": "raid_bdev1", 00:26:52.732 "method": "bdev_raid_add_base_bdev", 00:26:52.732 "req_id": 1 00:26:52.732 } 00:26:52.732 Got JSON-RPC error response 00:26:52.732 response: 00:26:52.732 { 00:26:52.732 "code": -22, 00:26:52.732 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:52.732 } 00:26:52.732 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:26:52.732 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:52.732 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:52.732 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:52.732 12:08:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.696 12:08:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.696 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.955 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.955 "name": "raid_bdev1", 00:26:53.955 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:53.955 "strip_size_kb": 0, 00:26:53.955 "state": "online", 00:26:53.955 "raid_level": "raid1", 00:26:53.955 "superblock": true, 00:26:53.955 "num_base_bdevs": 2, 00:26:53.955 "num_base_bdevs_discovered": 1, 00:26:53.955 "num_base_bdevs_operational": 1, 00:26:53.955 "base_bdevs_list": [ 00:26:53.955 { 00:26:53.955 "name": null, 00:26:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.955 "is_configured": false, 00:26:53.955 "data_offset": 256, 00:26:53.955 "data_size": 7936 00:26:53.955 }, 00:26:53.955 { 00:26:53.955 "name": "BaseBdev2", 00:26:53.955 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:53.955 "is_configured": true, 00:26:53.955 "data_offset": 256, 00:26:53.955 "data_size": 7936 00:26:53.955 } 00:26:53.955 ] 00:26:53.955 }' 00:26:53.955 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.955 12:08:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.522 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.781 "name": "raid_bdev1", 00:26:54.781 "uuid": "d78b4f22-73f4-42ed-9a6f-cc2aceb97385", 00:26:54.781 "strip_size_kb": 0, 
00:26:54.781 "state": "online", 00:26:54.781 "raid_level": "raid1", 00:26:54.781 "superblock": true, 00:26:54.781 "num_base_bdevs": 2, 00:26:54.781 "num_base_bdevs_discovered": 1, 00:26:54.781 "num_base_bdevs_operational": 1, 00:26:54.781 "base_bdevs_list": [ 00:26:54.781 { 00:26:54.781 "name": null, 00:26:54.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.781 "is_configured": false, 00:26:54.781 "data_offset": 256, 00:26:54.781 "data_size": 7936 00:26:54.781 }, 00:26:54.781 { 00:26:54.781 "name": "BaseBdev2", 00:26:54.781 "uuid": "62f2467c-56c3-564b-9866-6bb4325c4f91", 00:26:54.781 "is_configured": true, 00:26:54.781 "data_offset": 256, 00:26:54.781 "data_size": 7936 00:26:54.781 } 00:26:54.781 ] 00:26:54.781 }' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 78654 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 78654 ']' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 78654 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78654 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78654' 00:26:54.781 killing process with pid 78654 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 78654 00:26:54.781 Received shutdown signal, test time was about 60.000000 seconds 00:26:54.781 00:26:54.781 Latency(us) 00:26:54.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.781 =================================================================================================================== 00:26:54.781 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:54.781 [2024-07-25 12:08:40.881595] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:54.781 [2024-07-25 12:08:40.881671] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.781 [2024-07-25 12:08:40.881711] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.781 [2024-07-25 12:08:40.881723] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ce980 name raid_bdev1, state offline 00:26:54.781 12:08:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 78654 00:26:55.041 
[2024-07-25 12:08:40.911560] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:55.041 12:08:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:26:55.041 00:26:55.041 real 0m29.935s 00:26:55.041 user 0m46.332s 00:26:55.041 sys 0m4.810s 00:26:55.041 12:08:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:55.041 12:08:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:55.041 ************************************ 00:26:55.041 END TEST raid_rebuild_test_sb_md_separate 00:26:55.041 ************************************ 00:26:55.041 12:08:41 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:26:55.041 12:08:41 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:26:55.041 12:08:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:55.041 12:08:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.041 12:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:55.301 ************************************ 00:26:55.301 START TEST raid_state_function_test_sb_md_interleaved 00:26:55.301 ************************************ 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:55.301 12:08:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=84088 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 84088' 00:26:55.301 Process raid pid: 84088 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 84088 /var/tmp/spdk-raid.sock 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 84088 ']' 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:55.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.301 12:08:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:55.301 [2024-07-25 12:08:41.248931] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:26:55.301 [2024-07-25 12:08:41.248989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.301 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:55.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:55.302 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:55.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.302 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:55.302 [2024-07-25 12:08:41.382645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.561 [2024-07-25 12:08:41.469673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.561 [2024-07-25 12:08:41.531749] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:55.561 [2024-07-25 12:08:41.531784] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:56.128 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.128 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:26:56.128 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:56.386 [2024-07-25 12:08:42.359528] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:56.387 [2024-07-25 12:08:42.359562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:56.387 [2024-07-25 12:08:42.359575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:56.387 [2024-07-25 12:08:42.359586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.387 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.645 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.645 "name": "Existed_Raid", 00:26:56.645 "uuid": "7a74f12d-f668-4190-bcad-1e523a1a5482", 00:26:56.645 "strip_size_kb": 0, 00:26:56.645 "state": "configuring", 00:26:56.645 "raid_level": "raid1", 00:26:56.645 "superblock": true, 00:26:56.645 "num_base_bdevs": 2, 00:26:56.645 "num_base_bdevs_discovered": 0, 00:26:56.645 "num_base_bdevs_operational": 2, 00:26:56.645 "base_bdevs_list": [ 00:26:56.645 { 00:26:56.645 "name": "BaseBdev1", 00:26:56.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.645 "is_configured": false, 00:26:56.645 "data_offset": 0, 00:26:56.645 "data_size": 0 00:26:56.645 }, 00:26:56.645 { 00:26:56.645 "name": "BaseBdev2", 00:26:56.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.645 "is_configured": false, 00:26:56.645 "data_offset": 0, 00:26:56.645 "data_size": 0 00:26:56.645 } 00:26:56.645 ] 00:26:56.645 }' 00:26:56.645 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.645 12:08:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:57.214 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:57.472 [2024-07-25 12:08:43.341995] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:57.472 [2024-07-25 12:08:43.342020] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x163ef20 name Existed_Raid, state configuring 00:26:57.472 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:57.472 [2024-07-25 12:08:43.562585] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:57.472 [2024-07-25 12:08:43.562610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:57.472 [2024-07-25 12:08:43.562619] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:57.472 [2024-07-25 12:08:43.562630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:57.472 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:26:57.730 [2024-07-25 12:08:43.801048] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:57.730 BaseBdev1 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:57.730 12:08:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:57.989 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:58.247 [ 00:26:58.247 { 00:26:58.247 "name": "BaseBdev1", 00:26:58.247 "aliases": [ 00:26:58.247 "57c70a10-2deb-464e-8959-922d06d27493" 00:26:58.247 ], 00:26:58.247 "product_name": "Malloc disk", 00:26:58.247 "block_size": 4128, 00:26:58.247 "num_blocks": 8192, 00:26:58.247 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:26:58.247 "md_size": 32, 00:26:58.247 "md_interleave": true, 00:26:58.247 "dif_type": 0, 00:26:58.247 "assigned_rate_limits": { 00:26:58.247 "rw_ios_per_sec": 0, 00:26:58.247 "rw_mbytes_per_sec": 0, 00:26:58.247 "r_mbytes_per_sec": 0, 00:26:58.247 "w_mbytes_per_sec": 0 00:26:58.247 }, 00:26:58.247 "claimed": true, 00:26:58.247 "claim_type": "exclusive_write", 00:26:58.247 "zoned": false, 00:26:58.247 "supported_io_types": { 00:26:58.247 "read": true, 00:26:58.247 "write": true, 00:26:58.247 "unmap": true, 00:26:58.247 "flush": true, 00:26:58.247 "reset": true, 00:26:58.247 "nvme_admin": false, 00:26:58.247 "nvme_io": false, 00:26:58.247 "nvme_io_md": false, 00:26:58.247 "write_zeroes": true, 00:26:58.247 "zcopy": true, 00:26:58.247 "get_zone_info": false, 00:26:58.247 "zone_management": false, 00:26:58.247 "zone_append": false, 00:26:58.247 "compare": false, 00:26:58.247 "compare_and_write": false, 00:26:58.247 "abort": true, 00:26:58.247 "seek_hole": false, 00:26:58.247 "seek_data": false, 00:26:58.247 "copy": true, 00:26:58.247 "nvme_iov_md": false 00:26:58.247 }, 00:26:58.247 "memory_domains": [ 00:26:58.247 { 00:26:58.247 "dma_device_id": "system", 00:26:58.247 "dma_device_type": 1 00:26:58.247 }, 00:26:58.247 { 00:26:58.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.247 "dma_device_type": 2 00:26:58.247 } 00:26:58.247 ], 00:26:58.247 "driver_specific": {} 00:26:58.247 } 00:26:58.247 ] 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 2 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.247 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.506 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.506 "name": "Existed_Raid", 00:26:58.506 "uuid": "9f2af988-1890-45ce-b7f2-5acf0816555d", 00:26:58.506 "strip_size_kb": 0, 00:26:58.506 "state": "configuring", 00:26:58.506 "raid_level": "raid1", 00:26:58.506 "superblock": true, 00:26:58.506 "num_base_bdevs": 2, 00:26:58.506 "num_base_bdevs_discovered": 1, 00:26:58.506 "num_base_bdevs_operational": 2, 00:26:58.506 "base_bdevs_list": [ 00:26:58.506 { 00:26:58.506 "name": "BaseBdev1", 00:26:58.506 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:26:58.506 "is_configured": true, 00:26:58.506 "data_offset": 256, 00:26:58.506 "data_size": 7936 00:26:58.506 }, 00:26:58.506 { 00:26:58.506 "name": "BaseBdev2", 00:26:58.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.506 "is_configured": false, 00:26:58.506 "data_offset": 0, 00:26:58.506 "data_size": 0 00:26:58.506 } 00:26:58.506 ] 00:26:58.506 }' 00:26:58.506 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.506 12:08:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:59.072 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:59.330 [2024-07-25 12:08:45.276982] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:59.330 [2024-07-25 12:08:45.277018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x163e810 name Existed_Raid, state configuring 00:26:59.330 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 
00:26:59.589 [2024-07-25 12:08:45.505623] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.589 [2024-07-25 12:08:45.507000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:59.589 [2024-07-25 12:08:45.507029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.589 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.848 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.848 "name": "Existed_Raid", 00:26:59.848 "uuid": "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c", 00:26:59.848 "strip_size_kb": 0, 00:26:59.848 "state": "configuring", 00:26:59.848 "raid_level": "raid1", 00:26:59.848 "superblock": true, 00:26:59.848 "num_base_bdevs": 2, 00:26:59.848 "num_base_bdevs_discovered": 1, 00:26:59.848 "num_base_bdevs_operational": 2, 00:26:59.848 "base_bdevs_list": [ 00:26:59.848 { 00:26:59.848 "name": "BaseBdev1", 00:26:59.848 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:26:59.848 "is_configured": true, 00:26:59.848 "data_offset": 256, 00:26:59.848 "data_size": 7936 00:26:59.848 }, 00:26:59.848 { 00:26:59.848 "name": "BaseBdev2", 00:26:59.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.848 "is_configured": false, 00:26:59.848 "data_offset": 0, 00:26:59.848 "data_size": 0 00:26:59.848 } 00:26:59.848 ] 00:26:59.848 }' 00:26:59.848 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.848 12:08:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.416 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:27:00.675 [2024-07-25 12:08:46.535690] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:00.675 [2024-07-25 12:08:46.535808] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16c0710 00:27:00.675 [2024-07-25 12:08:46.535820] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:00.675 [2024-07-25 12:08:46.535875] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17d1f60 00:27:00.675 [2024-07-25 12:08:46.535948] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16c0710 00:27:00.675 [2024-07-25 12:08:46.535957] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16c0710 00:27:00.675 [2024-07-25 12:08:46.536009] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.675 BaseBdev2 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:00.675 12:08:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:00.934 [ 00:27:00.934 { 00:27:00.934 "name": "BaseBdev2", 00:27:00.934 "aliases": [ 00:27:00.934 "d6617b36-1d6a-4aa9-a0f2-15093b11f799" 00:27:00.934 ], 00:27:00.934 "product_name": "Malloc disk", 00:27:00.934 "block_size": 4128, 00:27:00.934 "num_blocks": 8192, 00:27:00.934 "uuid": "d6617b36-1d6a-4aa9-a0f2-15093b11f799", 00:27:00.934 "md_size": 32, 00:27:00.934 "md_interleave": true, 00:27:00.934 "dif_type": 0, 00:27:00.934 "assigned_rate_limits": { 00:27:00.934 "rw_ios_per_sec": 0, 00:27:00.934 "rw_mbytes_per_sec": 0, 00:27:00.934 "r_mbytes_per_sec": 0, 00:27:00.934 "w_mbytes_per_sec": 0 00:27:00.934 }, 00:27:00.934 "claimed": true, 00:27:00.934 "claim_type": "exclusive_write", 00:27:00.934 "zoned": false, 00:27:00.934 "supported_io_types": { 00:27:00.934 "read": true, 00:27:00.934 "write": true, 00:27:00.934 "unmap": true, 00:27:00.934 "flush": true, 00:27:00.934 "reset": true, 00:27:00.934 "nvme_admin": false, 00:27:00.934 "nvme_io": false, 00:27:00.934 "nvme_io_md": false, 00:27:00.934 "write_zeroes": true, 00:27:00.934 "zcopy": true, 00:27:00.934 "get_zone_info": false, 00:27:00.934 
"zone_management": false, 00:27:00.934 "zone_append": false, 00:27:00.934 "compare": false, 00:27:00.934 "compare_and_write": false, 00:27:00.934 "abort": true, 00:27:00.934 "seek_hole": false, 00:27:00.934 "seek_data": false, 00:27:00.934 "copy": true, 00:27:00.934 "nvme_iov_md": false 00:27:00.934 }, 00:27:00.934 "memory_domains": [ 00:27:00.934 { 00:27:00.934 "dma_device_id": "system", 00:27:00.934 "dma_device_type": 1 00:27:00.934 }, 00:27:00.934 { 00:27:00.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.934 "dma_device_type": 2 00:27:00.934 } 00:27:00.934 ], 00:27:00.934 "driver_specific": {} 00:27:00.934 } 00:27:00.934 ] 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.934 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.192 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.192 "name": "Existed_Raid", 00:27:01.192 "uuid": "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c", 00:27:01.192 "strip_size_kb": 0, 00:27:01.192 "state": "online", 00:27:01.192 "raid_level": "raid1", 00:27:01.192 "superblock": true, 00:27:01.192 "num_base_bdevs": 2, 00:27:01.192 "num_base_bdevs_discovered": 2, 00:27:01.192 "num_base_bdevs_operational": 2, 00:27:01.192 "base_bdevs_list": [ 00:27:01.192 { 00:27:01.192 "name": "BaseBdev1", 00:27:01.192 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:27:01.192 "is_configured": true, 00:27:01.192 "data_offset": 256, 00:27:01.192 "data_size": 7936 00:27:01.192 }, 00:27:01.192 { 00:27:01.192 "name": "BaseBdev2", 00:27:01.192 "uuid": "d6617b36-1d6a-4aa9-a0f2-15093b11f799", 
00:27:01.192 "is_configured": true, 00:27:01.192 "data_offset": 256, 00:27:01.192 "data_size": 7936 00:27:01.192 } 00:27:01.192 ] 00:27:01.192 }' 00:27:01.192 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.192 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:01.759 12:08:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:02.018 [2024-07-25 12:08:47.995792] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:02.018 "name": "Existed_Raid", 00:27:02.018 "aliases": [ 00:27:02.018 "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c" 00:27:02.018 ], 00:27:02.018 "product_name": "Raid Volume", 00:27:02.018 "block_size": 4128, 00:27:02.018 "num_blocks": 7936, 00:27:02.018 "uuid": "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c", 00:27:02.018 "md_size": 32, 00:27:02.018 "md_interleave": true, 00:27:02.018 "dif_type": 0, 00:27:02.018 "assigned_rate_limits": { 00:27:02.018 "rw_ios_per_sec": 0, 00:27:02.018 "rw_mbytes_per_sec": 0, 00:27:02.018 "r_mbytes_per_sec": 0, 00:27:02.018 "w_mbytes_per_sec": 0 00:27:02.018 }, 00:27:02.018 "claimed": false, 00:27:02.018 "zoned": false, 00:27:02.018 "supported_io_types": { 00:27:02.018 "read": true, 00:27:02.018 "write": true, 00:27:02.018 "unmap": false, 00:27:02.018 "flush": false, 00:27:02.018 "reset": true, 00:27:02.018 "nvme_admin": false, 00:27:02.018 "nvme_io": false, 00:27:02.018 "nvme_io_md": false, 00:27:02.018 "write_zeroes": true, 00:27:02.018 "zcopy": false, 00:27:02.018 "get_zone_info": false, 00:27:02.018 "zone_management": false, 00:27:02.018 "zone_append": false, 00:27:02.018 "compare": false, 00:27:02.018 "compare_and_write": false, 00:27:02.018 "abort": false, 00:27:02.018 "seek_hole": false, 00:27:02.018 "seek_data": false, 00:27:02.018 "copy": false, 00:27:02.018 "nvme_iov_md": false 00:27:02.018 }, 00:27:02.018 "memory_domains": [ 00:27:02.018 { 00:27:02.018 "dma_device_id": "system", 00:27:02.018 "dma_device_type": 1 00:27:02.018 }, 00:27:02.018 { 00:27:02.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.018 "dma_device_type": 2 00:27:02.018 }, 00:27:02.018 { 00:27:02.018 "dma_device_id": "system", 00:27:02.018 "dma_device_type": 1 00:27:02.018 }, 00:27:02.018 { 00:27:02.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.018 
"dma_device_type": 2 00:27:02.018 } 00:27:02.018 ], 00:27:02.018 "driver_specific": { 00:27:02.018 "raid": { 00:27:02.018 "uuid": "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c", 00:27:02.018 "strip_size_kb": 0, 00:27:02.018 "state": "online", 00:27:02.018 "raid_level": "raid1", 00:27:02.018 "superblock": true, 00:27:02.018 "num_base_bdevs": 2, 00:27:02.018 "num_base_bdevs_discovered": 2, 00:27:02.018 "num_base_bdevs_operational": 2, 00:27:02.018 "base_bdevs_list": [ 00:27:02.018 { 00:27:02.018 "name": "BaseBdev1", 00:27:02.018 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:27:02.018 "is_configured": true, 00:27:02.018 "data_offset": 256, 00:27:02.018 "data_size": 7936 00:27:02.018 }, 00:27:02.018 { 00:27:02.018 "name": "BaseBdev2", 00:27:02.018 "uuid": "d6617b36-1d6a-4aa9-a0f2-15093b11f799", 00:27:02.018 "is_configured": true, 00:27:02.018 "data_offset": 256, 00:27:02.018 "data_size": 7936 00:27:02.018 } 00:27:02.018 ] 00:27:02.018 } 00:27:02.018 } 00:27:02.018 }' 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:02.018 BaseBdev2' 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:02.018 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:02.278 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:02.278 "name": "BaseBdev1", 00:27:02.278 "aliases": [ 00:27:02.278 "57c70a10-2deb-464e-8959-922d06d27493" 00:27:02.278 ], 00:27:02.278 "product_name": "Malloc disk", 00:27:02.278 "block_size": 4128, 00:27:02.278 "num_blocks": 8192, 00:27:02.278 "uuid": "57c70a10-2deb-464e-8959-922d06d27493", 00:27:02.278 "md_size": 32, 00:27:02.278 "md_interleave": true, 00:27:02.278 "dif_type": 0, 00:27:02.278 "assigned_rate_limits": { 00:27:02.278 "rw_ios_per_sec": 0, 00:27:02.278 "rw_mbytes_per_sec": 0, 00:27:02.278 "r_mbytes_per_sec": 0, 00:27:02.278 "w_mbytes_per_sec": 0 00:27:02.278 }, 00:27:02.278 "claimed": true, 00:27:02.278 "claim_type": "exclusive_write", 00:27:02.278 "zoned": false, 00:27:02.278 "supported_io_types": { 00:27:02.278 "read": true, 00:27:02.278 "write": true, 00:27:02.278 "unmap": true, 00:27:02.278 "flush": true, 00:27:02.278 "reset": true, 00:27:02.278 "nvme_admin": false, 00:27:02.278 "nvme_io": false, 00:27:02.278 "nvme_io_md": false, 00:27:02.278 "write_zeroes": true, 00:27:02.278 "zcopy": true, 00:27:02.278 "get_zone_info": false, 00:27:02.278 "zone_management": false, 00:27:02.278 "zone_append": false, 00:27:02.278 "compare": false, 00:27:02.278 "compare_and_write": false, 00:27:02.278 "abort": true, 00:27:02.278 "seek_hole": false, 00:27:02.278 "seek_data": false, 00:27:02.278 "copy": true, 00:27:02.278 "nvme_iov_md": false 00:27:02.278 }, 00:27:02.278 "memory_domains": [ 00:27:02.278 { 00:27:02.278 "dma_device_id": "system", 00:27:02.278 "dma_device_type": 1 00:27:02.278 }, 00:27:02.278 { 00:27:02.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.278 "dma_device_type": 2 
00:27:02.278 } 00:27:02.278 ], 00:27:02.278 "driver_specific": {} 00:27:02.278 }' 00:27:02.278 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:02.278 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:02.278 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:02.278 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:02.537 12:08:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.142 "name": "BaseBdev2", 00:27:03.142 "aliases": [ 00:27:03.142 "d6617b36-1d6a-4aa9-a0f2-15093b11f799" 00:27:03.142 ], 00:27:03.142 "product_name": "Malloc disk", 00:27:03.142 "block_size": 4128, 00:27:03.142 "num_blocks": 8192, 00:27:03.142 "uuid": "d6617b36-1d6a-4aa9-a0f2-15093b11f799", 00:27:03.142 "md_size": 32, 00:27:03.142 "md_interleave": true, 00:27:03.142 "dif_type": 0, 00:27:03.142 "assigned_rate_limits": { 00:27:03.142 "rw_ios_per_sec": 0, 00:27:03.142 "rw_mbytes_per_sec": 0, 00:27:03.142 "r_mbytes_per_sec": 0, 00:27:03.142 "w_mbytes_per_sec": 0 00:27:03.142 }, 00:27:03.142 "claimed": true, 00:27:03.142 "claim_type": "exclusive_write", 00:27:03.142 "zoned": false, 00:27:03.142 "supported_io_types": { 00:27:03.142 "read": true, 00:27:03.142 "write": true, 00:27:03.142 "unmap": true, 00:27:03.142 "flush": true, 00:27:03.142 "reset": true, 00:27:03.142 "nvme_admin": false, 00:27:03.142 "nvme_io": false, 00:27:03.142 "nvme_io_md": false, 00:27:03.142 "write_zeroes": true, 00:27:03.142 "zcopy": true, 00:27:03.142 "get_zone_info": false, 00:27:03.142 "zone_management": false, 00:27:03.142 "zone_append": false, 00:27:03.142 "compare": false, 00:27:03.142 "compare_and_write": false, 00:27:03.142 "abort": true, 00:27:03.142 "seek_hole": false, 00:27:03.142 "seek_data": false, 00:27:03.142 "copy": true, 00:27:03.142 "nvme_iov_md": false 00:27:03.142 }, 00:27:03.142 "memory_domains": [ 00:27:03.142 { 
00:27:03.142 "dma_device_id": "system", 00:27:03.142 "dma_device_type": 1 00:27:03.142 }, 00:27:03.142 { 00:27:03.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.142 "dma_device_type": 2 00:27:03.142 } 00:27:03.142 ], 00:27:03.142 "driver_specific": {} 00:27:03.142 }' 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.142 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.427 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:03.428 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:03.687 [2024-07-25 12:08:49.611860] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.687 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.945 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.945 "name": "Existed_Raid", 00:27:03.945 "uuid": "4b9a42e9-d2f0-4f74-aa6f-9c2b40a8c24c", 00:27:03.945 "strip_size_kb": 0, 00:27:03.945 "state": "online", 00:27:03.945 "raid_level": "raid1", 00:27:03.945 "superblock": true, 00:27:03.945 "num_base_bdevs": 2, 00:27:03.945 "num_base_bdevs_discovered": 1, 00:27:03.945 "num_base_bdevs_operational": 1, 00:27:03.945 "base_bdevs_list": [ 00:27:03.945 { 00:27:03.945 "name": null, 00:27:03.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.945 "is_configured": false, 00:27:03.945 "data_offset": 256, 00:27:03.945 "data_size": 7936 00:27:03.945 }, 00:27:03.945 { 00:27:03.945 "name": "BaseBdev2", 00:27:03.945 "uuid": "d6617b36-1d6a-4aa9-a0f2-15093b11f799", 00:27:03.945 "is_configured": true, 00:27:03.945 "data_offset": 256, 00:27:03.945 "data_size": 7936 00:27:03.945 } 00:27:03.945 ] 00:27:03.945 }' 00:27:03.945 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.945 12:08:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:04.512 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:04.512 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:04.512 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.512 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:04.770 [2024-07-25 12:08:50.856152] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:04.770 [2024-07-25 12:08:50.856229] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.770 [2024-07-25 12:08:50.867064] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.770 [2024-07-25 12:08:50.867095] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.770 [2024-07-25 12:08:50.867106] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16c0710 name Existed_Raid, state offline 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.770 12:08:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 84088 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 84088 ']' 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 84088 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.027 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84088 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84088' 00:27:05.285 killing process with pid 84088 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 84088 00:27:05.285 [2024-07-25 12:08:51.189902] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 84088 00:27:05.285 [2024-07-25 12:08:51.190750] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:27:05.285 00:27:05.285 real 0m10.198s 00:27:05.285 user 0m18.117s 00:27:05.285 sys 0m1.922s 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.285 12:08:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:05.285 ************************************ 00:27:05.285 END TEST raid_state_function_test_sb_md_interleaved 00:27:05.285 ************************************ 00:27:05.543 
12:08:51 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:27:05.543 12:08:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:05.543 12:08:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.543 12:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:05.543 ************************************ 00:27:05.543 START TEST raid_superblock_test_md_interleaved 00:27:05.543 ************************************ 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=86017 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 86017 /var/tmp/spdk-raid.sock 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 86017 ']' 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:27:05.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.543 12:08:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:05.543 [2024-07-25 12:08:51.511520] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:05.543 [2024-07-25 12:08:51.511575] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86017 ] 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 
0000:3f:01.3 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:05.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.543 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:05.543 [2024-07-25 12:08:51.641023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.802 [2024-07-25 12:08:51.727433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.802 [2024-07-25 12:08:51.787572] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.802 [2024-07-25 12:08:51.787614] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:06.368 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:27:06.627 malloc1 00:27:06.627 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:06.885 [2024-07-25 12:08:52.852620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:06.885 [2024-07-25 12:08:52.852661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.885 [2024-07-25 12:08:52.852681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bf3310 00:27:06.885 [2024-07-25 12:08:52.852692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.885 [2024-07-25 12:08:52.854047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.885 [2024-07-25 12:08:52.854074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:06.885 pt1 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:06.885 12:08:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:27:07.144 malloc2 00:27:07.144 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:07.403 [2024-07-25 12:08:53.310664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:07.403 [2024-07-25 12:08:53.310703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.403 [2024-07-25 12:08:53.310720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bea950 00:27:07.403 [2024-07-25 12:08:53.310731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.403 [2024-07-25 12:08:53.311931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.403 [2024-07-25 12:08:53.311961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:07.403 pt2 00:27:07.403 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
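For readers following the trace, the loop above builds both base bdevs purely through SPDK's rpc.py against the dedicated /var/tmp/spdk-raid.sock socket. Below is a minimal shell sketch of that setup, restricted to the RPC calls that appear verbatim in this log; RPC and SOCK are shorthand variables introduced here only for brevity (the long path is just the CI workspace checkout of SPDK), so treat the sketch as an illustration rather than the test script itself.
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# 32 MiB malloc bdev with 4096-byte blocks (8192 blocks, matching num_blocks in the dumps above);
# judging by those dumps, "-m 32 -i" gives 32 bytes of interleaved metadata per block
# (block_size 4128, md_size 32, md_interleave true)
$RPC -s $SOCK bdev_malloc_create 32 4096 -m 32 -i -b malloc1
# wrap it in a passthru bdev with a fixed UUID so later assertions can match on a stable name/uuid
$RPC -s $SOCK bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# second base bdev, same shape
$RPC -s $SOCK bdev_malloc_create 32 4096 -m 32 -i -b malloc2
$RPC -s $SOCK bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# the pt1/pt2 pair is then assembled into a RAID1 volume with an on-disk superblock (-s),
# and state is checked by filtering the bdev_raid_get_bdevs output with jq
$RPC -s $SOCK bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
Those fixed all-zero UUIDs are the same ones that show up in base_bdevs_list further down, which is why the jq-based state checks can key on them directly.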
00:27:07.403 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:07.403 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:07.662 [2024-07-25 12:08:53.539292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:07.662 [2024-07-25 12:08:53.540455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:07.662 [2024-07-25 12:08:53.540590] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bf3ae0 00:27:07.662 [2024-07-25 12:08:53.540603] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:07.662 [2024-07-25 12:08:53.540668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a55f50 00:27:07.662 [2024-07-25 12:08:53.540748] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bf3ae0 00:27:07.662 [2024-07-25 12:08:53.540757] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bf3ae0 00:27:07.662 [2024-07-25 12:08:53.540808] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.662 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.920 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.920 "name": "raid_bdev1", 00:27:07.920 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:07.920 "strip_size_kb": 0, 00:27:07.920 "state": "online", 00:27:07.920 "raid_level": "raid1", 00:27:07.920 "superblock": true, 00:27:07.920 "num_base_bdevs": 2, 00:27:07.920 "num_base_bdevs_discovered": 2, 00:27:07.920 "num_base_bdevs_operational": 2, 00:27:07.920 "base_bdevs_list": [ 00:27:07.920 { 00:27:07.920 "name": "pt1", 00:27:07.920 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:07.920 "is_configured": true, 00:27:07.920 "data_offset": 256, 00:27:07.920 "data_size": 7936 00:27:07.920 }, 00:27:07.920 { 00:27:07.920 "name": "pt2", 00:27:07.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.920 "is_configured": true, 00:27:07.920 "data_offset": 256, 00:27:07.920 "data_size": 7936 00:27:07.920 } 00:27:07.920 ] 00:27:07.920 }' 00:27:07.920 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.920 12:08:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:08.178 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:08.436 [2024-07-25 12:08:54.502031] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.436 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:08.436 "name": "raid_bdev1", 00:27:08.436 "aliases": [ 00:27:08.436 "211ba957-5494-4cc9-a935-8d33dfdb6147" 00:27:08.436 ], 00:27:08.436 "product_name": "Raid Volume", 00:27:08.436 "block_size": 4128, 00:27:08.436 "num_blocks": 7936, 00:27:08.436 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:08.436 "md_size": 32, 00:27:08.436 "md_interleave": true, 00:27:08.436 "dif_type": 0, 00:27:08.436 "assigned_rate_limits": { 00:27:08.436 "rw_ios_per_sec": 0, 00:27:08.436 "rw_mbytes_per_sec": 0, 00:27:08.436 "r_mbytes_per_sec": 0, 00:27:08.436 "w_mbytes_per_sec": 0 00:27:08.436 }, 00:27:08.436 "claimed": false, 00:27:08.436 "zoned": false, 00:27:08.436 "supported_io_types": { 00:27:08.436 "read": true, 00:27:08.436 "write": true, 00:27:08.436 "unmap": false, 00:27:08.437 "flush": false, 00:27:08.437 "reset": true, 00:27:08.437 "nvme_admin": false, 00:27:08.437 "nvme_io": false, 00:27:08.437 "nvme_io_md": false, 00:27:08.437 "write_zeroes": true, 00:27:08.437 "zcopy": false, 00:27:08.437 "get_zone_info": false, 00:27:08.437 "zone_management": false, 00:27:08.437 "zone_append": false, 00:27:08.437 "compare": false, 00:27:08.437 "compare_and_write": false, 00:27:08.437 "abort": false, 00:27:08.437 "seek_hole": false, 00:27:08.437 "seek_data": false, 00:27:08.437 "copy": false, 00:27:08.437 "nvme_iov_md": false 00:27:08.437 }, 00:27:08.437 "memory_domains": [ 00:27:08.437 { 00:27:08.437 "dma_device_id": "system", 00:27:08.437 "dma_device_type": 1 00:27:08.437 }, 00:27:08.437 { 00:27:08.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.437 "dma_device_type": 2 00:27:08.437 }, 00:27:08.437 { 
00:27:08.437 "dma_device_id": "system", 00:27:08.437 "dma_device_type": 1 00:27:08.437 }, 00:27:08.437 { 00:27:08.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.437 "dma_device_type": 2 00:27:08.437 } 00:27:08.437 ], 00:27:08.437 "driver_specific": { 00:27:08.437 "raid": { 00:27:08.437 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:08.437 "strip_size_kb": 0, 00:27:08.437 "state": "online", 00:27:08.437 "raid_level": "raid1", 00:27:08.437 "superblock": true, 00:27:08.437 "num_base_bdevs": 2, 00:27:08.437 "num_base_bdevs_discovered": 2, 00:27:08.437 "num_base_bdevs_operational": 2, 00:27:08.437 "base_bdevs_list": [ 00:27:08.437 { 00:27:08.437 "name": "pt1", 00:27:08.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:08.437 "is_configured": true, 00:27:08.437 "data_offset": 256, 00:27:08.437 "data_size": 7936 00:27:08.437 }, 00:27:08.437 { 00:27:08.437 "name": "pt2", 00:27:08.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.437 "is_configured": true, 00:27:08.437 "data_offset": 256, 00:27:08.437 "data_size": 7936 00:27:08.437 } 00:27:08.437 ] 00:27:08.437 } 00:27:08.437 } 00:27:08.437 }' 00:27:08.437 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:08.694 pt2' 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:08.694 "name": "pt1", 00:27:08.694 "aliases": [ 00:27:08.694 "00000000-0000-0000-0000-000000000001" 00:27:08.694 ], 00:27:08.694 "product_name": "passthru", 00:27:08.694 "block_size": 4128, 00:27:08.694 "num_blocks": 8192, 00:27:08.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:08.694 "md_size": 32, 00:27:08.694 "md_interleave": true, 00:27:08.694 "dif_type": 0, 00:27:08.694 "assigned_rate_limits": { 00:27:08.694 "rw_ios_per_sec": 0, 00:27:08.694 "rw_mbytes_per_sec": 0, 00:27:08.694 "r_mbytes_per_sec": 0, 00:27:08.694 "w_mbytes_per_sec": 0 00:27:08.694 }, 00:27:08.694 "claimed": true, 00:27:08.694 "claim_type": "exclusive_write", 00:27:08.694 "zoned": false, 00:27:08.694 "supported_io_types": { 00:27:08.694 "read": true, 00:27:08.694 "write": true, 00:27:08.694 "unmap": true, 00:27:08.694 "flush": true, 00:27:08.694 "reset": true, 00:27:08.694 "nvme_admin": false, 00:27:08.694 "nvme_io": false, 00:27:08.694 "nvme_io_md": false, 00:27:08.694 "write_zeroes": true, 00:27:08.694 "zcopy": true, 00:27:08.694 "get_zone_info": false, 00:27:08.694 "zone_management": false, 00:27:08.694 "zone_append": false, 00:27:08.694 "compare": false, 00:27:08.694 "compare_and_write": false, 00:27:08.694 "abort": true, 00:27:08.694 "seek_hole": false, 00:27:08.694 "seek_data": false, 00:27:08.694 "copy": true, 00:27:08.694 "nvme_iov_md": false 00:27:08.694 }, 00:27:08.694 "memory_domains": [ 00:27:08.694 { 00:27:08.694 "dma_device_id": "system", 00:27:08.694 "dma_device_type": 1 00:27:08.694 }, 00:27:08.694 { 
00:27:08.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.694 "dma_device_type": 2 00:27:08.694 } 00:27:08.694 ], 00:27:08.694 "driver_specific": { 00:27:08.694 "passthru": { 00:27:08.694 "name": "pt1", 00:27:08.694 "base_bdev_name": "malloc1" 00:27:08.694 } 00:27:08.694 } 00:27:08.694 }' 00:27:08.694 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:08.951 12:08:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:08.951 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:08.951 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:08.951 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.208 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.208 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:09.208 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.208 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:09.208 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.465 "name": "pt2", 00:27:09.465 "aliases": [ 00:27:09.465 "00000000-0000-0000-0000-000000000002" 00:27:09.465 ], 00:27:09.465 "product_name": "passthru", 00:27:09.465 "block_size": 4128, 00:27:09.465 "num_blocks": 8192, 00:27:09.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:09.465 "md_size": 32, 00:27:09.465 "md_interleave": true, 00:27:09.465 "dif_type": 0, 00:27:09.465 "assigned_rate_limits": { 00:27:09.465 "rw_ios_per_sec": 0, 00:27:09.465 "rw_mbytes_per_sec": 0, 00:27:09.465 "r_mbytes_per_sec": 0, 00:27:09.465 "w_mbytes_per_sec": 0 00:27:09.465 }, 00:27:09.465 "claimed": true, 00:27:09.465 "claim_type": "exclusive_write", 00:27:09.465 "zoned": false, 00:27:09.465 "supported_io_types": { 00:27:09.465 "read": true, 00:27:09.465 "write": true, 00:27:09.465 "unmap": true, 00:27:09.465 "flush": true, 00:27:09.465 "reset": true, 00:27:09.465 "nvme_admin": false, 00:27:09.465 "nvme_io": false, 00:27:09.465 "nvme_io_md": false, 00:27:09.465 "write_zeroes": true, 00:27:09.465 "zcopy": true, 00:27:09.465 "get_zone_info": false, 00:27:09.465 "zone_management": false, 00:27:09.465 "zone_append": false, 00:27:09.465 "compare": false, 00:27:09.465 "compare_and_write": false, 00:27:09.465 "abort": true, 00:27:09.465 "seek_hole": false, 00:27:09.465 "seek_data": false, 00:27:09.465 "copy": true, 00:27:09.465 
"nvme_iov_md": false 00:27:09.465 }, 00:27:09.465 "memory_domains": [ 00:27:09.465 { 00:27:09.465 "dma_device_id": "system", 00:27:09.465 "dma_device_type": 1 00:27:09.465 }, 00:27:09.465 { 00:27:09.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.465 "dma_device_type": 2 00:27:09.465 } 00:27:09.465 ], 00:27:09.465 "driver_specific": { 00:27:09.465 "passthru": { 00:27:09.465 "name": "pt2", 00:27:09.465 "base_bdev_name": "malloc2" 00:27:09.465 } 00:27:09.465 } 00:27:09.465 }' 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.465 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.722 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:09.980 [2024-07-25 12:08:55.893699] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.980 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=211ba957-5494-4cc9-a935-8d33dfdb6147 00:27:09.980 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 211ba957-5494-4cc9-a935-8d33dfdb6147 ']' 00:27:09.980 12:08:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:10.238 [2024-07-25 12:08:56.122065] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:10.238 [2024-07-25 12:08:56.122082] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.239 [2024-07-25 12:08:56.122129] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.239 [2024-07-25 12:08:56.122188] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.239 [2024-07-25 12:08:56.122199] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bf3ae0 name raid_bdev1, state offline 00:27:10.239 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:10.239 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:10.497 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:10.755 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:10.755 12:08:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:11.013 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:11.013 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:11.013 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:27:11.013 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:11.014 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:11.272 [2024-07-25 12:08:57.236937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:11.272 [2024-07-25 12:08:57.238182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:11.272 [2024-07-25 12:08:57.238235] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:11.272 [2024-07-25 12:08:57.238271] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:11.272 [2024-07-25 12:08:57.238288] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:11.272 [2024-07-25 12:08:57.238297] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be97e0 name raid_bdev1, state configuring 00:27:11.272 request: 00:27:11.272 { 00:27:11.272 "name": "raid_bdev1", 00:27:11.272 "raid_level": "raid1", 00:27:11.272 "base_bdevs": [ 00:27:11.272 "malloc1", 00:27:11.272 "malloc2" 00:27:11.272 ], 00:27:11.272 "superblock": false, 00:27:11.272 "method": "bdev_raid_create", 00:27:11.272 "req_id": 1 00:27:11.272 } 00:27:11.272 Got JSON-RPC error response 00:27:11.272 response: 00:27:11.272 { 00:27:11.272 "code": -17, 00:27:11.272 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:11.272 } 00:27:11.272 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:27:11.272 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:11.272 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:11.272 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:11.272 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.273 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:11.531 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:11.531 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:11.531 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:11.789 [2024-07-25 12:08:57.682075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:11.789 [2024-07-25 12:08:57.682116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.789 [2024-07-25 12:08:57.682132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1becd90 00:27:11.790 [2024-07-25 12:08:57.682149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.790 [2024-07-25 12:08:57.683445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:27:11.790 [2024-07-25 12:08:57.683471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:11.790 [2024-07-25 12:08:57.683515] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:11.790 [2024-07-25 12:08:57.683538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:11.790 pt1 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.790 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.048 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.048 "name": "raid_bdev1", 00:27:12.048 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:12.048 "strip_size_kb": 0, 00:27:12.048 "state": "configuring", 00:27:12.048 "raid_level": "raid1", 00:27:12.048 "superblock": true, 00:27:12.048 "num_base_bdevs": 2, 00:27:12.048 "num_base_bdevs_discovered": 1, 00:27:12.048 "num_base_bdevs_operational": 2, 00:27:12.048 "base_bdevs_list": [ 00:27:12.048 { 00:27:12.048 "name": "pt1", 00:27:12.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:12.048 "is_configured": true, 00:27:12.048 "data_offset": 256, 00:27:12.048 "data_size": 7936 00:27:12.048 }, 00:27:12.048 { 00:27:12.048 "name": null, 00:27:12.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.048 "is_configured": false, 00:27:12.048 "data_offset": 256, 00:27:12.048 "data_size": 7936 00:27:12.048 } 00:27:12.048 ] 00:27:12.048 }' 00:27:12.048 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.048 12:08:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < 
num_base_bdevs )) 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:12.615 [2024-07-25 12:08:58.636691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:12.615 [2024-07-25 12:08:58.636733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.615 [2024-07-25 12:08:58.636749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1beb080 00:27:12.615 [2024-07-25 12:08:58.636760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.615 [2024-07-25 12:08:58.636903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.615 [2024-07-25 12:08:58.636918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:12.615 [2024-07-25 12:08:58.636955] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:12.615 [2024-07-25 12:08:58.636972] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:12.615 [2024-07-25 12:08:58.637045] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bf36e0 00:27:12.615 [2024-07-25 12:08:58.637055] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:12.615 [2024-07-25 12:08:58.637105] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1beed90 00:27:12.615 [2024-07-25 12:08:58.637188] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bf36e0 00:27:12.615 [2024-07-25 12:08:58.637197] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bf36e0 00:27:12.615 [2024-07-25 12:08:58.637251] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.615 pt2 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:12.615 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.616 12:08:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.616 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.874 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.874 "name": "raid_bdev1", 00:27:12.874 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:12.874 "strip_size_kb": 0, 00:27:12.874 "state": "online", 00:27:12.874 "raid_level": "raid1", 00:27:12.874 "superblock": true, 00:27:12.874 "num_base_bdevs": 2, 00:27:12.875 "num_base_bdevs_discovered": 2, 00:27:12.875 "num_base_bdevs_operational": 2, 00:27:12.875 "base_bdevs_list": [ 00:27:12.875 { 00:27:12.875 "name": "pt1", 00:27:12.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:12.875 "is_configured": true, 00:27:12.875 "data_offset": 256, 00:27:12.875 "data_size": 7936 00:27:12.875 }, 00:27:12.875 { 00:27:12.875 "name": "pt2", 00:27:12.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.875 "is_configured": true, 00:27:12.875 "data_offset": 256, 00:27:12.875 "data_size": 7936 00:27:12.875 } 00:27:12.875 ] 00:27:12.875 }' 00:27:12.875 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.875 12:08:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:13.442 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.442 [2024-07-25 12:08:59.555341] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.700 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:13.700 "name": "raid_bdev1", 00:27:13.700 "aliases": [ 00:27:13.700 "211ba957-5494-4cc9-a935-8d33dfdb6147" 00:27:13.700 ], 00:27:13.700 "product_name": "Raid Volume", 00:27:13.700 "block_size": 4128, 00:27:13.700 "num_blocks": 7936, 00:27:13.700 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:13.700 "md_size": 32, 00:27:13.700 "md_interleave": true, 00:27:13.700 "dif_type": 0, 00:27:13.700 "assigned_rate_limits": { 00:27:13.700 "rw_ios_per_sec": 0, 00:27:13.700 "rw_mbytes_per_sec": 0, 00:27:13.700 "r_mbytes_per_sec": 0, 00:27:13.700 "w_mbytes_per_sec": 0 00:27:13.700 }, 00:27:13.700 "claimed": false, 00:27:13.700 "zoned": false, 00:27:13.700 "supported_io_types": { 00:27:13.700 "read": true, 00:27:13.700 
"write": true, 00:27:13.700 "unmap": false, 00:27:13.700 "flush": false, 00:27:13.700 "reset": true, 00:27:13.700 "nvme_admin": false, 00:27:13.700 "nvme_io": false, 00:27:13.700 "nvme_io_md": false, 00:27:13.700 "write_zeroes": true, 00:27:13.700 "zcopy": false, 00:27:13.700 "get_zone_info": false, 00:27:13.700 "zone_management": false, 00:27:13.700 "zone_append": false, 00:27:13.700 "compare": false, 00:27:13.700 "compare_and_write": false, 00:27:13.700 "abort": false, 00:27:13.700 "seek_hole": false, 00:27:13.700 "seek_data": false, 00:27:13.700 "copy": false, 00:27:13.700 "nvme_iov_md": false 00:27:13.700 }, 00:27:13.700 "memory_domains": [ 00:27:13.700 { 00:27:13.700 "dma_device_id": "system", 00:27:13.700 "dma_device_type": 1 00:27:13.700 }, 00:27:13.700 { 00:27:13.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.700 "dma_device_type": 2 00:27:13.700 }, 00:27:13.700 { 00:27:13.700 "dma_device_id": "system", 00:27:13.700 "dma_device_type": 1 00:27:13.700 }, 00:27:13.700 { 00:27:13.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.700 "dma_device_type": 2 00:27:13.700 } 00:27:13.700 ], 00:27:13.701 "driver_specific": { 00:27:13.701 "raid": { 00:27:13.701 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:13.701 "strip_size_kb": 0, 00:27:13.701 "state": "online", 00:27:13.701 "raid_level": "raid1", 00:27:13.701 "superblock": true, 00:27:13.701 "num_base_bdevs": 2, 00:27:13.701 "num_base_bdevs_discovered": 2, 00:27:13.701 "num_base_bdevs_operational": 2, 00:27:13.701 "base_bdevs_list": [ 00:27:13.701 { 00:27:13.701 "name": "pt1", 00:27:13.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.701 "is_configured": true, 00:27:13.701 "data_offset": 256, 00:27:13.701 "data_size": 7936 00:27:13.701 }, 00:27:13.701 { 00:27:13.701 "name": "pt2", 00:27:13.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.701 "is_configured": true, 00:27:13.701 "data_offset": 256, 00:27:13.701 "data_size": 7936 00:27:13.701 } 00:27:13.701 ] 00:27:13.701 } 00:27:13.701 } 00:27:13.701 }' 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:13.701 pt2' 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:13.701 "name": "pt1", 00:27:13.701 "aliases": [ 00:27:13.701 "00000000-0000-0000-0000-000000000001" 00:27:13.701 ], 00:27:13.701 "product_name": "passthru", 00:27:13.701 "block_size": 4128, 00:27:13.701 "num_blocks": 8192, 00:27:13.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.701 "md_size": 32, 00:27:13.701 "md_interleave": true, 00:27:13.701 "dif_type": 0, 00:27:13.701 "assigned_rate_limits": { 00:27:13.701 "rw_ios_per_sec": 0, 00:27:13.701 "rw_mbytes_per_sec": 0, 00:27:13.701 "r_mbytes_per_sec": 0, 00:27:13.701 "w_mbytes_per_sec": 0 00:27:13.701 }, 00:27:13.701 "claimed": true, 00:27:13.701 "claim_type": 
"exclusive_write", 00:27:13.701 "zoned": false, 00:27:13.701 "supported_io_types": { 00:27:13.701 "read": true, 00:27:13.701 "write": true, 00:27:13.701 "unmap": true, 00:27:13.701 "flush": true, 00:27:13.701 "reset": true, 00:27:13.701 "nvme_admin": false, 00:27:13.701 "nvme_io": false, 00:27:13.701 "nvme_io_md": false, 00:27:13.701 "write_zeroes": true, 00:27:13.701 "zcopy": true, 00:27:13.701 "get_zone_info": false, 00:27:13.701 "zone_management": false, 00:27:13.701 "zone_append": false, 00:27:13.701 "compare": false, 00:27:13.701 "compare_and_write": false, 00:27:13.701 "abort": true, 00:27:13.701 "seek_hole": false, 00:27:13.701 "seek_data": false, 00:27:13.701 "copy": true, 00:27:13.701 "nvme_iov_md": false 00:27:13.701 }, 00:27:13.701 "memory_domains": [ 00:27:13.701 { 00:27:13.701 "dma_device_id": "system", 00:27:13.701 "dma_device_type": 1 00:27:13.701 }, 00:27:13.701 { 00:27:13.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.701 "dma_device_type": 2 00:27:13.701 } 00:27:13.701 ], 00:27:13.701 "driver_specific": { 00:27:13.701 "passthru": { 00:27:13.701 "name": "pt1", 00:27:13.701 "base_bdev_name": "malloc1" 00:27:13.701 } 00:27:13.701 } 00:27:13.701 }' 00:27:13.701 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:13.960 12:08:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:13.960 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:13.960 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:13.960 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.218 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.218 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:14.218 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:14.218 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:14.218 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:14.477 "name": "pt2", 00:27:14.477 "aliases": [ 00:27:14.477 "00000000-0000-0000-0000-000000000002" 00:27:14.477 ], 00:27:14.477 "product_name": "passthru", 00:27:14.477 "block_size": 4128, 00:27:14.477 "num_blocks": 8192, 00:27:14.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.477 "md_size": 32, 00:27:14.477 "md_interleave": true, 00:27:14.477 "dif_type": 0, 00:27:14.477 "assigned_rate_limits": { 00:27:14.477 
"rw_ios_per_sec": 0, 00:27:14.477 "rw_mbytes_per_sec": 0, 00:27:14.477 "r_mbytes_per_sec": 0, 00:27:14.477 "w_mbytes_per_sec": 0 00:27:14.477 }, 00:27:14.477 "claimed": true, 00:27:14.477 "claim_type": "exclusive_write", 00:27:14.477 "zoned": false, 00:27:14.477 "supported_io_types": { 00:27:14.477 "read": true, 00:27:14.477 "write": true, 00:27:14.477 "unmap": true, 00:27:14.477 "flush": true, 00:27:14.477 "reset": true, 00:27:14.477 "nvme_admin": false, 00:27:14.477 "nvme_io": false, 00:27:14.477 "nvme_io_md": false, 00:27:14.477 "write_zeroes": true, 00:27:14.477 "zcopy": true, 00:27:14.477 "get_zone_info": false, 00:27:14.477 "zone_management": false, 00:27:14.477 "zone_append": false, 00:27:14.477 "compare": false, 00:27:14.477 "compare_and_write": false, 00:27:14.477 "abort": true, 00:27:14.477 "seek_hole": false, 00:27:14.477 "seek_data": false, 00:27:14.477 "copy": true, 00:27:14.477 "nvme_iov_md": false 00:27:14.477 }, 00:27:14.477 "memory_domains": [ 00:27:14.477 { 00:27:14.477 "dma_device_id": "system", 00:27:14.477 "dma_device_type": 1 00:27:14.477 }, 00:27:14.477 { 00:27:14.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.477 "dma_device_type": 2 00:27:14.477 } 00:27:14.477 ], 00:27:14.477 "driver_specific": { 00:27:14.477 "passthru": { 00:27:14.477 "name": "pt2", 00:27:14.477 "base_bdev_name": "malloc2" 00:27:14.477 } 00:27:14.477 } 00:27:14.477 }' 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:14.477 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.736 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:14.994 [2024-07-25 12:09:00.890875] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.994 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 211ba957-5494-4cc9-a935-8d33dfdb6147 '!=' 211ba957-5494-4cc9-a935-8d33dfdb6147 ']' 00:27:14.994 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:14.994 12:09:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:14.994 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:27:14.994 12:09:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:15.253 [2024-07-25 12:09:01.119256] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.253 "name": "raid_bdev1", 00:27:15.253 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:15.253 "strip_size_kb": 0, 00:27:15.253 "state": "online", 00:27:15.253 "raid_level": "raid1", 00:27:15.253 "superblock": true, 00:27:15.253 "num_base_bdevs": 2, 00:27:15.253 "num_base_bdevs_discovered": 1, 00:27:15.253 "num_base_bdevs_operational": 1, 00:27:15.253 "base_bdevs_list": [ 00:27:15.253 { 00:27:15.253 "name": null, 00:27:15.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.253 "is_configured": false, 00:27:15.253 "data_offset": 256, 00:27:15.253 "data_size": 7936 00:27:15.253 }, 00:27:15.253 { 00:27:15.253 "name": "pt2", 00:27:15.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:15.253 "is_configured": true, 00:27:15.253 "data_offset": 256, 00:27:15.253 "data_size": 7936 00:27:15.253 } 00:27:15.253 ] 00:27:15.253 }' 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.253 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:16.193 12:09:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:16.193 [2024-07-25 
12:09:02.166001] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:16.193 [2024-07-25 12:09:02.166023] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:16.193 [2024-07-25 12:09:02.166069] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:16.193 [2024-07-25 12:09:02.166108] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:16.193 [2024-07-25 12:09:02.166118] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bf36e0 name raid_bdev1, state offline 00:27:16.193 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.193 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:16.452 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:16.452 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:16.452 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:16.452 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:16.452 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:16.710 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:16.711 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:16.711 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:16.711 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:16.711 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:27:16.711 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:16.970 [2024-07-25 12:09:02.839745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:16.970 [2024-07-25 12:09:02.839785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.970 [2024-07-25 12:09:02.839800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bee9a0 00:27:16.970 [2024-07-25 12:09:02.839811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.970 [2024-07-25 12:09:02.841135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.970 [2024-07-25 12:09:02.841171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:16.970 [2024-07-25 12:09:02.841214] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:16.970 [2024-07-25 12:09:02.841238] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:16.970 [2024-07-25 12:09:02.841304] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1bee420 00:27:16.970 [2024-07-25 12:09:02.841314] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:16.970 [2024-07-25 12:09:02.841368] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a56260 00:27:16.970 [2024-07-25 12:09:02.841433] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bee420 00:27:16.970 [2024-07-25 12:09:02.841442] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bee420 00:27:16.970 [2024-07-25 12:09:02.841490] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.970 pt2 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.970 12:09:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.970 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.970 "name": "raid_bdev1", 00:27:16.970 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:16.970 "strip_size_kb": 0, 00:27:16.970 "state": "online", 00:27:16.970 "raid_level": "raid1", 00:27:16.970 "superblock": true, 00:27:16.970 "num_base_bdevs": 2, 00:27:16.970 "num_base_bdevs_discovered": 1, 00:27:16.970 "num_base_bdevs_operational": 1, 00:27:16.970 "base_bdevs_list": [ 00:27:16.970 { 00:27:16.970 "name": null, 00:27:16.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.970 "is_configured": false, 00:27:16.970 "data_offset": 256, 00:27:16.970 "data_size": 7936 00:27:16.970 }, 00:27:16.970 { 00:27:16.970 "name": "pt2", 00:27:16.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:16.970 "is_configured": true, 00:27:16.970 "data_offset": 256, 00:27:16.970 "data_size": 7936 00:27:16.970 } 00:27:16.970 ] 00:27:16.970 }' 00:27:16.970 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.970 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:17.536 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:17.794 [2024-07-25 12:09:03.786234] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.794 [2024-07-25 12:09:03.786256] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:17.794 [2024-07-25 12:09:03.786298] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.794 [2024-07-25 12:09:03.786338] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.794 [2024-07-25 12:09:03.786348] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bee420 name raid_bdev1, state offline 00:27:17.794 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:17.794 12:09:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.052 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:18.052 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:18.052 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:27:18.052 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:18.310 [2024-07-25 12:09:04.247431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:18.310 [2024-07-25 12:09:04.247468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.310 [2024-07-25 12:09:04.247484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bee6a0 00:27:18.310 [2024-07-25 12:09:04.247495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.310 [2024-07-25 12:09:04.248802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.310 [2024-07-25 12:09:04.248827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:18.310 [2024-07-25 12:09:04.248870] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:18.310 [2024-07-25 12:09:04.248894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:18.310 [2024-07-25 12:09:04.248966] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:18.310 [2024-07-25 12:09:04.248978] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.310 [2024-07-25 12:09:04.248991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bf1550 name raid_bdev1, state configuring 00:27:18.310 [2024-07-25 12:09:04.249012] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:18.310 [2024-07-25 12:09:04.249058] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bf1ff0 00:27:18.310 [2024-07-25 12:09:04.249067] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:18.310 [2024-07-25 12:09:04.249115] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bef4d0 00:27:18.310 [2024-07-25 12:09:04.249190] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bf1ff0 00:27:18.310 [2024-07-25 12:09:04.249200] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bf1ff0 00:27:18.310 [2024-07-25 12:09:04.249253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.310 pt1 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.310 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.569 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.569 "name": "raid_bdev1", 00:27:18.569 "uuid": "211ba957-5494-4cc9-a935-8d33dfdb6147", 00:27:18.569 "strip_size_kb": 0, 00:27:18.569 "state": "online", 00:27:18.569 "raid_level": "raid1", 00:27:18.569 "superblock": true, 00:27:18.569 "num_base_bdevs": 2, 00:27:18.569 "num_base_bdevs_discovered": 1, 00:27:18.569 "num_base_bdevs_operational": 1, 00:27:18.569 "base_bdevs_list": [ 00:27:18.569 { 00:27:18.569 "name": null, 00:27:18.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.569 "is_configured": false, 00:27:18.569 "data_offset": 256, 00:27:18.569 "data_size": 7936 00:27:18.569 }, 00:27:18.569 { 00:27:18.569 "name": "pt2", 00:27:18.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:18.569 "is_configured": true, 00:27:18.569 "data_offset": 256, 00:27:18.569 "data_size": 7936 00:27:18.569 } 00:27:18.569 ] 00:27:18.569 }' 00:27:18.569 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.569 12:09:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:19.137 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 
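Each verify_raid_bdev_state call in the trace above reduces to the same pattern: dump the raid bdevs over the test RPC socket, pick the entry by name with jq, and compare the handful of fields the test asserts on. A minimal stand-alone sketch of that check, assuming an SPDK app is still listening on /var/tmp/spdk-raid.sock and reusing the rpc.py commands and jq filters shown in the trace (the rpc alias and the info variable are illustrative shorthand, not part of the harness):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump every raid bdev and select raid_bdev1, as bdev_raid.sh@126 does.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Fields compared by verify_raid_bdev_state: state, level, and the base bdev counts.
    echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational'
    # bdev_raid.sh@554 additionally checks that the first base bdev slot reports
    # is_configured == false after its passthru bdev has been dropped:
    $rpc bdev_raid_get_bdevs online | jq -r '.[].base_bdevs_list[0].is_configured'

The raid_bdev= / '[' -n '' ']' pairs earlier in the trace are the degenerate form of the same query, used only to confirm that no raid bdev survived a rejected create or a delete.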
00:27:19.137 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:19.395 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:19.395 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:19.395 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:19.654 [2024-07-25 12:09:05.514967] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 211ba957-5494-4cc9-a935-8d33dfdb6147 '!=' 211ba957-5494-4cc9-a935-8d33dfdb6147 ']' 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 86017 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 86017 ']' 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 86017 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86017 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86017' 00:27:19.654 killing process with pid 86017 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 86017 00:27:19.654 [2024-07-25 12:09:05.618164] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.654 [2024-07-25 12:09:05.618215] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.654 [2024-07-25 12:09:05.618254] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.654 [2024-07-25 12:09:05.618265] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bf1ff0 name raid_bdev1, state offline 00:27:19.654 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 86017 00:27:19.654 [2024-07-25 12:09:05.634406] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.914 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:27:19.914 00:27:19.914 real 0m14.358s 00:27:19.914 user 0m25.956s 00:27:19.914 sys 0m2.644s 00:27:19.914 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.914 12:09:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:19.914 ************************************ 00:27:19.914 END TEST 
raid_superblock_test_md_interleaved 00:27:19.914 ************************************ 00:27:19.914 12:09:05 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:27:19.914 12:09:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:19.914 12:09:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.914 12:09:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.914 ************************************ 00:27:19.914 START TEST raid_rebuild_test_sb_md_interleaved 00:27:19.914 ************************************ 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:19.914 12:09:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=89192 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 89192 /var/tmp/spdk-raid.sock 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89192 ']' 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:19.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.914 12:09:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:19.915 [2024-07-25 12:09:05.955962] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:19.915 [2024-07-25 12:09:05.956019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89192 ] 00:27:19.915 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:19.915 Zero copy mechanism will not be used. 
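The qat_pci_device_allocate()/EAL lines that follow are emitted while DPDK probes the node's QAT virtual functions; they are non-fatal here, and the trace continues straight into bdev creation. For reference, a minimal sketch of the fixture this rebuild test stands up, reusing the bdevperf invocation and RPC commands that appear verbatim in the trace (SPDK_ROOT, RPC, and raid_pid are illustrative shorthand, not part of the harness):

    SPDK_ROOT=/var/jenkins/workspace/crypto-phy-autotest/spdk
    RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # bdevperf drives raid_bdev1 for 60s of 50/50 randrw at 3 MiB I/O size and queue depth 2;
    # -z keeps it waiting so the bdevs can be set up over the RPC socket first, and
    # -L bdev_raid enables the *DEBUG* raid log seen throughout this trace.
    "$SPDK_ROOT/build/examples/bdevperf" -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # The harness waits for the socket with: waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # Base bdevs: two 32 MiB malloc bdevs, 4096-byte blocks with 32 bytes of interleaved
    # metadata (hence the blocklen 4128 in the raid debug output), wrapped in passthru bdevs.
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
    $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    # Assemble the raid1 volume with an on-disk superblock (-s), as bdev_raid.sh@611 does.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

The spare_malloc/spare_delay/spare chain and the bdev_raid_remove_base_bdev BaseBdev1 call later in the trace operate on this same fixture.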
00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:19.915 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:19.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:19.915 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:20.173 [2024-07-25 12:09:06.087181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.173 [2024-07-25 12:09:06.172969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.173 [2024-07-25 12:09:06.234653] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.173 [2024-07-25 12:09:06.234694] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.108 12:09:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.108 12:09:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:27:21.108 12:09:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:21.108 12:09:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:27:21.108 BaseBdev1_malloc 00:27:21.108 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:21.367 [2024-07-25 12:09:07.291469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:21.367 [2024-07-25 12:09:07.291510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.367 [2024-07-25 12:09:07.291532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1333610 00:27:21.367 [2024-07-25 12:09:07.291543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.367 [2024-07-25 12:09:07.293026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.367 [2024-07-25 12:09:07.293055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:21.367 BaseBdev1 00:27:21.367 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:21.367 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:27:21.626 BaseBdev2_malloc 00:27:21.626 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:21.626 [2024-07-25 12:09:07.741501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:21.626 [2024-07-25 12:09:07.741542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.626 [2024-07-25 12:09:07.741561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132acc0 00:27:21.626 [2024-07-25 12:09:07.741573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.627 [2024-07-25 12:09:07.742767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.627 [2024-07-25 12:09:07.742792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:21.885 BaseBdev2 00:27:21.885 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:27:21.885 spare_malloc 00:27:21.885 12:09:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:22.143 spare_delay 00:27:22.144 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:22.402 [2024-07-25 12:09:08.431912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:22.402 [2024-07-25 12:09:08.431952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.402 [2024-07-25 12:09:08.431971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132b8e0 00:27:22.402 [2024-07-25 12:09:08.431982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.402 [2024-07-25 12:09:08.433164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.402 [2024-07-25 12:09:08.433192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:22.402 spare 00:27:22.402 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:22.661 [2024-07-25 12:09:08.672567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.661 [2024-07-25 12:09:08.673726] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.661 [2024-07-25 12:09:08.673877] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x132e2b0 00:27:22.661 [2024-07-25 12:09:08.673890] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:22.661 [2024-07-25 12:09:08.673951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1196210 00:27:22.661 [2024-07-25 12:09:08.674028] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132e2b0 00:27:22.661 [2024-07-25 12:09:08.674037] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x132e2b0 00:27:22.661 [2024-07-25 12:09:08.674088] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.661 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.919 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.919 "name": "raid_bdev1", 00:27:22.919 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:22.919 "strip_size_kb": 0, 00:27:22.919 "state": "online", 00:27:22.919 "raid_level": "raid1", 00:27:22.919 "superblock": true, 00:27:22.919 "num_base_bdevs": 2, 00:27:22.919 "num_base_bdevs_discovered": 2, 00:27:22.919 "num_base_bdevs_operational": 2, 00:27:22.919 "base_bdevs_list": [ 00:27:22.919 { 00:27:22.919 "name": "BaseBdev1", 00:27:22.919 "uuid": "674bf968-1004-5ae9-b2d9-0e39b7f31a63", 00:27:22.919 "is_configured": true, 00:27:22.919 "data_offset": 256, 00:27:22.919 "data_size": 7936 00:27:22.919 }, 00:27:22.919 { 00:27:22.919 "name": "BaseBdev2", 00:27:22.919 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:22.919 "is_configured": true, 00:27:22.919 "data_offset": 256, 00:27:22.919 "data_size": 7936 00:27:22.919 } 00:27:22.919 ] 00:27:22.919 }' 00:27:22.920 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.920 12:09:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:23.487 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:23.487 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:23.746 [2024-07-25 12:09:09.751613] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.746 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:27:23.746 
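The verify_raid_bdev_state calls traced above reduce to one bdev_raid_get_bdevs RPC filtered with jq, followed by field comparisons. A minimal standalone sketch of that check, reusing the socket and script paths shown in the trace (the variable names and hard-coded expected values here are illustrative, not the harness's own code):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the raid_bdev1 entry out of the full raid bdev listing, as the sh@126 lines above do.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Assert on the same fields the test checks: state, level, and the base bdev counts.
  [[ $(jq -r '.state' <<<"$info") == online ]] || exit 1
  [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]] || exit 1
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 2 ]] || exit 1
  [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq 2 ]] || exit 1

The sh@615 lines just above read the size the same way, only through bdev_get_bdevs -b raid_bdev1 and '.[].num_blocks' (7936 here).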
12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.746 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:24.004 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:27:24.004 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:24.004 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:27:24.004 12:09:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:24.262 [2024-07-25 12:09:10.132388] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.262 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.520 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.520 "name": "raid_bdev1", 00:27:24.520 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:24.520 "strip_size_kb": 0, 00:27:24.520 "state": "online", 00:27:24.520 "raid_level": "raid1", 00:27:24.520 "superblock": true, 00:27:24.520 "num_base_bdevs": 2, 00:27:24.520 "num_base_bdevs_discovered": 1, 00:27:24.520 "num_base_bdevs_operational": 1, 00:27:24.520 "base_bdevs_list": [ 00:27:24.520 { 00:27:24.520 "name": null, 00:27:24.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.520 "is_configured": false, 00:27:24.520 "data_offset": 256, 00:27:24.520 "data_size": 7936 00:27:24.520 }, 00:27:24.520 { 00:27:24.520 "name": "BaseBdev2", 00:27:24.520 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:24.521 "is_configured": true, 00:27:24.521 "data_offset": 256, 00:27:24.521 
"data_size": 7936 00:27:24.521 } 00:27:24.521 ] 00:27:24.521 }' 00:27:24.521 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.521 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:25.088 12:09:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:25.088 [2024-07-25 12:09:11.131038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:25.088 [2024-07-25 12:09:11.134469] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132f070 00:27:25.088 [2024-07-25 12:09:11.136579] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.088 12:09:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.463 "name": "raid_bdev1", 00:27:26.463 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:26.463 "strip_size_kb": 0, 00:27:26.463 "state": "online", 00:27:26.463 "raid_level": "raid1", 00:27:26.463 "superblock": true, 00:27:26.463 "num_base_bdevs": 2, 00:27:26.463 "num_base_bdevs_discovered": 2, 00:27:26.463 "num_base_bdevs_operational": 2, 00:27:26.463 "process": { 00:27:26.463 "type": "rebuild", 00:27:26.463 "target": "spare", 00:27:26.463 "progress": { 00:27:26.463 "blocks": 3072, 00:27:26.463 "percent": 38 00:27:26.463 } 00:27:26.463 }, 00:27:26.463 "base_bdevs_list": [ 00:27:26.463 { 00:27:26.463 "name": "spare", 00:27:26.463 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:26.463 "is_configured": true, 00:27:26.463 "data_offset": 256, 00:27:26.463 "data_size": 7936 00:27:26.463 }, 00:27:26.463 { 00:27:26.463 "name": "BaseBdev2", 00:27:26.463 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:26.463 "is_configured": true, 00:27:26.463 "data_offset": 256, 00:27:26.463 "data_size": 7936 00:27:26.463 } 00:27:26.463 ] 00:27:26.463 }' 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.463 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:26.722 [2024-07-25 12:09:12.677603] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.722 [2024-07-25 12:09:12.748392] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.722 [2024-07-25 12:09:12.748433] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.722 [2024-07-25 12:09:12.748447] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.722 [2024-07-25 12:09:12.748454] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.722 12:09:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.980 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.980 "name": "raid_bdev1", 00:27:26.980 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:26.980 "strip_size_kb": 0, 00:27:26.980 "state": "online", 00:27:26.980 "raid_level": "raid1", 00:27:26.980 "superblock": true, 00:27:26.980 "num_base_bdevs": 2, 00:27:26.980 "num_base_bdevs_discovered": 1, 00:27:26.980 "num_base_bdevs_operational": 1, 00:27:26.980 "base_bdevs_list": [ 00:27:26.980 { 00:27:26.980 "name": null, 00:27:26.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.980 "is_configured": false, 00:27:26.980 "data_offset": 256, 00:27:26.980 "data_size": 7936 00:27:26.980 }, 00:27:26.980 { 00:27:26.980 "name": "BaseBdev2", 00:27:26.980 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:26.980 "is_configured": true, 00:27:26.980 "data_offset": 256, 00:27:26.980 "data_size": 7936 
00:27:26.980 } 00:27:26.980 ] 00:27:26.980 }' 00:27:26.980 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.980 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.573 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.831 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:27.831 "name": "raid_bdev1", 00:27:27.831 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:27.831 "strip_size_kb": 0, 00:27:27.831 "state": "online", 00:27:27.831 "raid_level": "raid1", 00:27:27.831 "superblock": true, 00:27:27.832 "num_base_bdevs": 2, 00:27:27.832 "num_base_bdevs_discovered": 1, 00:27:27.832 "num_base_bdevs_operational": 1, 00:27:27.832 "base_bdevs_list": [ 00:27:27.832 { 00:27:27.832 "name": null, 00:27:27.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.832 "is_configured": false, 00:27:27.832 "data_offset": 256, 00:27:27.832 "data_size": 7936 00:27:27.832 }, 00:27:27.832 { 00:27:27.832 "name": "BaseBdev2", 00:27:27.832 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:27.832 "is_configured": true, 00:27:27.832 "data_offset": 256, 00:27:27.832 "data_size": 7936 00:27:27.832 } 00:27:27.832 ] 00:27:27.832 }' 00:27:27.832 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:27.832 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:27.832 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:27.832 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:27.832 12:09:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:28.091 [2024-07-25 12:09:14.091436] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:28.091 [2024-07-25 12:09:14.094833] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132bc40 00:27:28.091 [2024-07-25 12:09:14.096194] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:28.091 12:09:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.025 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.287 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.287 "name": "raid_bdev1", 00:27:29.287 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:29.287 "strip_size_kb": 0, 00:27:29.287 "state": "online", 00:27:29.287 "raid_level": "raid1", 00:27:29.287 "superblock": true, 00:27:29.287 "num_base_bdevs": 2, 00:27:29.287 "num_base_bdevs_discovered": 2, 00:27:29.287 "num_base_bdevs_operational": 2, 00:27:29.287 "process": { 00:27:29.287 "type": "rebuild", 00:27:29.287 "target": "spare", 00:27:29.287 "progress": { 00:27:29.287 "blocks": 3072, 00:27:29.287 "percent": 38 00:27:29.287 } 00:27:29.287 }, 00:27:29.287 "base_bdevs_list": [ 00:27:29.287 { 00:27:29.287 "name": "spare", 00:27:29.287 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:29.287 "is_configured": true, 00:27:29.287 "data_offset": 256, 00:27:29.287 "data_size": 7936 00:27:29.287 }, 00:27:29.287 { 00:27:29.287 "name": "BaseBdev2", 00:27:29.287 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:29.287 "is_configured": true, 00:27:29.287 "data_offset": 256, 00:27:29.287 "data_size": 7936 00:27:29.287 } 00:27:29.287 ] 00:27:29.287 }' 00:27:29.287 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.287 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.288 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.590 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.590 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:29.590 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:29.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1060 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < 
timeout )) 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.591 "name": "raid_bdev1", 00:27:29.591 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:29.591 "strip_size_kb": 0, 00:27:29.591 "state": "online", 00:27:29.591 "raid_level": "raid1", 00:27:29.591 "superblock": true, 00:27:29.591 "num_base_bdevs": 2, 00:27:29.591 "num_base_bdevs_discovered": 2, 00:27:29.591 "num_base_bdevs_operational": 2, 00:27:29.591 "process": { 00:27:29.591 "type": "rebuild", 00:27:29.591 "target": "spare", 00:27:29.591 "progress": { 00:27:29.591 "blocks": 3840, 00:27:29.591 "percent": 48 00:27:29.591 } 00:27:29.591 }, 00:27:29.591 "base_bdevs_list": [ 00:27:29.591 { 00:27:29.591 "name": "spare", 00:27:29.591 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:29.591 "is_configured": true, 00:27:29.591 "data_offset": 256, 00:27:29.591 "data_size": 7936 00:27:29.591 }, 00:27:29.591 { 00:27:29.591 "name": "BaseBdev2", 00:27:29.591 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:29.591 "is_configured": true, 00:27:29.591 "data_offset": 256, 00:27:29.591 "data_size": 7936 00:27:29.591 } 00:27:29.591 ] 00:27:29.591 }' 00:27:29.591 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.863 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.863 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.863 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.863 12:09:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.799 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.058 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.058 "name": "raid_bdev1", 00:27:31.058 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:31.058 "strip_size_kb": 0, 00:27:31.058 "state": "online", 00:27:31.058 "raid_level": "raid1", 00:27:31.058 "superblock": true, 00:27:31.058 "num_base_bdevs": 2, 00:27:31.058 "num_base_bdevs_discovered": 2, 00:27:31.058 "num_base_bdevs_operational": 2, 00:27:31.058 "process": { 00:27:31.058 "type": "rebuild", 00:27:31.058 "target": "spare", 00:27:31.058 "progress": { 00:27:31.058 "blocks": 7168, 00:27:31.058 "percent": 90 00:27:31.058 } 00:27:31.058 }, 00:27:31.058 "base_bdevs_list": [ 00:27:31.058 { 00:27:31.058 "name": "spare", 00:27:31.058 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:31.058 "is_configured": true, 00:27:31.058 "data_offset": 256, 00:27:31.058 "data_size": 7936 00:27:31.058 }, 00:27:31.058 { 00:27:31.058 "name": "BaseBdev2", 00:27:31.058 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:31.058 "is_configured": true, 00:27:31.058 "data_offset": 256, 00:27:31.058 "data_size": 7936 00:27:31.058 } 00:27:31.058 ] 00:27:31.058 }' 00:27:31.058 12:09:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.058 12:09:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.058 12:09:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.058 12:09:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.058 12:09:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:31.316 [2024-07-25 12:09:17.218588] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:31.316 [2024-07-25 12:09:17.218640] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:31.316 [2024-07-25 12:09:17.218715] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.251 
12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.251 "name": "raid_bdev1", 00:27:32.251 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:32.251 "strip_size_kb": 0, 00:27:32.251 "state": "online", 00:27:32.251 "raid_level": "raid1", 00:27:32.251 "superblock": true, 00:27:32.251 "num_base_bdevs": 2, 00:27:32.251 "num_base_bdevs_discovered": 2, 00:27:32.251 "num_base_bdevs_operational": 2, 00:27:32.251 "base_bdevs_list": [ 00:27:32.251 { 00:27:32.251 "name": "spare", 00:27:32.251 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:32.251 "is_configured": true, 00:27:32.251 "data_offset": 256, 00:27:32.251 "data_size": 7936 00:27:32.251 }, 00:27:32.251 { 00:27:32.251 "name": "BaseBdev2", 00:27:32.251 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:32.251 "is_configured": true, 00:27:32.251 "data_offset": 256, 00:27:32.251 "data_size": 7936 00:27:32.251 } 00:27:32.251 ] 00:27:32.251 }' 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.251 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.510 "name": "raid_bdev1", 00:27:32.510 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:32.510 "strip_size_kb": 0, 00:27:32.510 "state": "online", 00:27:32.510 "raid_level": "raid1", 00:27:32.510 "superblock": true, 00:27:32.510 "num_base_bdevs": 2, 00:27:32.510 "num_base_bdevs_discovered": 2, 00:27:32.510 "num_base_bdevs_operational": 2, 00:27:32.510 "base_bdevs_list": [ 00:27:32.510 { 00:27:32.510 "name": "spare", 00:27:32.510 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:32.510 "is_configured": true, 00:27:32.510 "data_offset": 256, 00:27:32.510 "data_size": 7936 00:27:32.510 }, 00:27:32.510 { 00:27:32.510 "name": "BaseBdev2", 00:27:32.510 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 
00:27:32.510 "is_configured": true, 00:27:32.510 "data_offset": 256, 00:27:32.510 "data_size": 7936 00:27:32.510 } 00:27:32.510 ] 00:27:32.510 }' 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.510 12:09:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.077 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.077 "name": "raid_bdev1", 00:27:33.077 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:33.077 "strip_size_kb": 0, 00:27:33.077 "state": "online", 00:27:33.077 "raid_level": "raid1", 00:27:33.077 "superblock": true, 00:27:33.077 "num_base_bdevs": 2, 00:27:33.077 "num_base_bdevs_discovered": 2, 00:27:33.077 "num_base_bdevs_operational": 2, 00:27:33.077 "base_bdevs_list": [ 00:27:33.077 { 00:27:33.077 "name": "spare", 00:27:33.077 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:33.077 "is_configured": true, 00:27:33.077 "data_offset": 256, 00:27:33.077 "data_size": 7936 00:27:33.077 }, 00:27:33.077 { 00:27:33.077 "name": "BaseBdev2", 00:27:33.077 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:33.077 "is_configured": true, 00:27:33.077 "data_offset": 256, 00:27:33.077 "data_size": 7936 00:27:33.077 } 00:27:33.077 ] 00:27:33.077 }' 00:27:33.077 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.077 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:33.653 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:33.911 [2024-07-25 12:09:19.894182] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.911 [2024-07-25 12:09:19.894206] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.911 [2024-07-25 12:09:19.894255] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.911 [2024-07-25 12:09:19.894305] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.911 [2024-07-25 12:09:19.894316] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132e2b0 name raid_bdev1, state offline 00:27:33.911 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.911 12:09:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:27:34.170 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:34.170 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:27:34.170 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:34.170 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:34.429 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:34.688 [2024-07-25 12:09:20.567912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:34.688 [2024-07-25 12:09:20.567952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.688 [2024-07-25 12:09:20.567969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1331a20 00:27:34.688 [2024-07-25 12:09:20.567981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.688 [2024-07-25 12:09:20.569586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.688 [2024-07-25 12:09:20.569616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:34.688 [2024-07-25 12:09:20.569666] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:34.688 [2024-07-25 12:09:20.569692] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:34.688 [2024-07-25 12:09:20.569771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:34.688 spare 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.688 [2024-07-25 12:09:20.670076] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x132ee10 00:27:34.688 [2024-07-25 12:09:20.670090] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:34.688 [2024-07-25 12:09:20.670163] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11967c0 00:27:34.688 [2024-07-25 12:09:20.670251] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132ee10 00:27:34.688 [2024-07-25 12:09:20.670260] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132ee10 00:27:34.688 [2024-07-25 12:09:20.670319] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.688 "name": "raid_bdev1", 00:27:34.688 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:34.688 "strip_size_kb": 0, 00:27:34.688 "state": "online", 00:27:34.688 "raid_level": "raid1", 00:27:34.688 "superblock": true, 00:27:34.688 "num_base_bdevs": 2, 00:27:34.688 "num_base_bdevs_discovered": 2, 00:27:34.688 "num_base_bdevs_operational": 2, 00:27:34.688 "base_bdevs_list": [ 00:27:34.688 { 00:27:34.688 "name": "spare", 00:27:34.688 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:34.688 "is_configured": true, 00:27:34.688 "data_offset": 256, 00:27:34.688 "data_size": 7936 00:27:34.688 }, 00:27:34.688 { 00:27:34.688 "name": "BaseBdev2", 00:27:34.688 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:34.688 "is_configured": true, 00:27:34.688 "data_offset": 256, 00:27:34.688 "data_size": 7936 00:27:34.688 } 00:27:34.688 ] 00:27:34.688 }' 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.688 12:09:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.255 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:35.514 "name": "raid_bdev1", 00:27:35.514 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:35.514 "strip_size_kb": 0, 00:27:35.514 "state": "online", 00:27:35.514 "raid_level": "raid1", 00:27:35.514 "superblock": true, 00:27:35.514 "num_base_bdevs": 2, 00:27:35.514 "num_base_bdevs_discovered": 2, 00:27:35.514 "num_base_bdevs_operational": 2, 00:27:35.514 "base_bdevs_list": [ 00:27:35.514 { 00:27:35.514 "name": "spare", 00:27:35.514 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:35.514 "is_configured": true, 00:27:35.514 "data_offset": 256, 00:27:35.514 "data_size": 7936 00:27:35.514 }, 00:27:35.514 { 00:27:35.514 "name": "BaseBdev2", 00:27:35.514 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:35.514 "is_configured": true, 00:27:35.514 "data_offset": 256, 00:27:35.514 "data_size": 7936 00:27:35.514 } 00:27:35.514 ] 00:27:35.514 }' 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:35.514 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.773 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:35.773 12:09:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:36.032 [2024-07-25 12:09:21.995765] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=1 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.032 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.290 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.290 "name": "raid_bdev1", 00:27:36.290 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:36.290 "strip_size_kb": 0, 00:27:36.290 "state": "online", 00:27:36.290 "raid_level": "raid1", 00:27:36.290 "superblock": true, 00:27:36.290 "num_base_bdevs": 2, 00:27:36.290 "num_base_bdevs_discovered": 1, 00:27:36.290 "num_base_bdevs_operational": 1, 00:27:36.290 "base_bdevs_list": [ 00:27:36.290 { 00:27:36.290 "name": null, 00:27:36.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.290 "is_configured": false, 00:27:36.290 "data_offset": 256, 00:27:36.290 "data_size": 7936 00:27:36.290 }, 00:27:36.290 { 00:27:36.290 "name": "BaseBdev2", 00:27:36.290 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:36.290 "is_configured": true, 00:27:36.290 "data_offset": 256, 00:27:36.290 "data_size": 7936 00:27:36.290 } 00:27:36.290 ] 00:27:36.290 }' 00:27:36.290 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.290 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:36.856 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:36.856 [2024-07-25 12:09:22.950294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.856 [2024-07-25 12:09:22.950425] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:36.856 [2024-07-25 12:09:22.950440] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
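After the re-add (spare's on-disk superblock sequence number 4 is older than the array's 5, hence the re-add notice above), a rebuild starts and the harness tracks it through the same bdev_raid_get_bdevs RPC: it first confirms .process.type is "rebuild" with target "spare", and in the earlier sh@706-710 loop it polls once per second until the process object disappears. A hedged sketch of that polling pattern; the 60-second bound and variable names are illustrative assumptions, while the jq expression matches the one used at sh@189/190:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  deadline=$((SECONDS + 60))   # assumed bound; the test uses its own timeout value
  while (( SECONDS < deadline )); do
      # Resolves to "none" once the rebuild process object is gone from raid_bdev1.
      ptype=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
              | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [[ $ptype == none ]] && break
      sleep 1
  done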
00:27:36.856 [2024-07-25 12:09:22.950466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.856 [2024-07-25 12:09:22.953758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1321780 00:27:36.856 [2024-07-25 12:09:22.955906] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:36.856 12:09:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.238 12:09:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.238 "name": "raid_bdev1", 00:27:38.238 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:38.238 "strip_size_kb": 0, 00:27:38.238 "state": "online", 00:27:38.238 "raid_level": "raid1", 00:27:38.238 "superblock": true, 00:27:38.238 "num_base_bdevs": 2, 00:27:38.238 "num_base_bdevs_discovered": 2, 00:27:38.238 "num_base_bdevs_operational": 2, 00:27:38.238 "process": { 00:27:38.238 "type": "rebuild", 00:27:38.238 "target": "spare", 00:27:38.238 "progress": { 00:27:38.238 "blocks": 3072, 00:27:38.238 "percent": 38 00:27:38.238 } 00:27:38.238 }, 00:27:38.238 "base_bdevs_list": [ 00:27:38.238 { 00:27:38.238 "name": "spare", 00:27:38.238 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:38.238 "is_configured": true, 00:27:38.238 "data_offset": 256, 00:27:38.238 "data_size": 7936 00:27:38.238 }, 00:27:38.238 { 00:27:38.238 "name": "BaseBdev2", 00:27:38.238 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:38.238 "is_configured": true, 00:27:38.238 "data_offset": 256, 00:27:38.238 "data_size": 7936 00:27:38.238 } 00:27:38.238 ] 00:27:38.238 }' 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.238 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:38.497 [2024-07-25 12:09:24.512936] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.497 [2024-07-25 12:09:24.567635] 
bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:38.497 [2024-07-25 12:09:24.567674] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.497 [2024-07-25 12:09:24.567687] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.497 [2024-07-25 12:09:24.567695] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.497 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.756 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.756 "name": "raid_bdev1", 00:27:38.756 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:38.756 "strip_size_kb": 0, 00:27:38.756 "state": "online", 00:27:38.756 "raid_level": "raid1", 00:27:38.756 "superblock": true, 00:27:38.756 "num_base_bdevs": 2, 00:27:38.756 "num_base_bdevs_discovered": 1, 00:27:38.756 "num_base_bdevs_operational": 1, 00:27:38.756 "base_bdevs_list": [ 00:27:38.756 { 00:27:38.756 "name": null, 00:27:38.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.756 "is_configured": false, 00:27:38.756 "data_offset": 256, 00:27:38.756 "data_size": 7936 00:27:38.756 }, 00:27:38.756 { 00:27:38.756 "name": "BaseBdev2", 00:27:38.756 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:38.756 "is_configured": true, 00:27:38.756 "data_offset": 256, 00:27:38.756 "data_size": 7936 00:27:38.756 } 00:27:38.756 ] 00:27:38.756 }' 00:27:38.756 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.756 12:09:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:39.323 12:09:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:39.581 [2024-07-25 
12:09:25.645966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:39.581 [2024-07-25 12:09:25.646008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.581 [2024-07-25 12:09:25.646030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1320fa0 00:27:39.581 [2024-07-25 12:09:25.646042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.581 [2024-07-25 12:09:25.646223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.581 [2024-07-25 12:09:25.646240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:39.581 [2024-07-25 12:09:25.646291] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:39.581 [2024-07-25 12:09:25.646303] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:39.581 [2024-07-25 12:09:25.646313] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:39.581 [2024-07-25 12:09:25.646331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:39.581 [2024-07-25 12:09:25.649639] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1331800 00:27:39.581 [2024-07-25 12:09:25.650993] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:39.581 spare 00:27:39.582 12:09:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.957 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.957 "name": "raid_bdev1", 00:27:40.957 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:40.957 "strip_size_kb": 0, 00:27:40.957 "state": "online", 00:27:40.957 "raid_level": "raid1", 00:27:40.957 "superblock": true, 00:27:40.957 "num_base_bdevs": 2, 00:27:40.957 "num_base_bdevs_discovered": 2, 00:27:40.957 "num_base_bdevs_operational": 2, 00:27:40.957 "process": { 00:27:40.957 "type": "rebuild", 00:27:40.958 "target": "spare", 00:27:40.958 "progress": { 00:27:40.958 "blocks": 3072, 00:27:40.958 "percent": 38 00:27:40.958 } 00:27:40.958 }, 00:27:40.958 "base_bdevs_list": [ 00:27:40.958 { 00:27:40.958 "name": "spare", 00:27:40.958 "uuid": "1b276759-aa09-5a04-b3a5-3050f72720ba", 00:27:40.958 "is_configured": true, 00:27:40.958 "data_offset": 256, 00:27:40.958 
"data_size": 7936 00:27:40.958 }, 00:27:40.958 { 00:27:40.958 "name": "BaseBdev2", 00:27:40.958 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:40.958 "is_configured": true, 00:27:40.958 "data_offset": 256, 00:27:40.958 "data_size": 7936 00:27:40.958 } 00:27:40.958 ] 00:27:40.958 }' 00:27:40.958 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:40.958 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:40.958 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:40.958 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.958 12:09:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:41.216 [2024-07-25 12:09:27.208023] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:41.216 [2024-07-25 12:09:27.262759] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:41.216 [2024-07-25 12:09:27.262803] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:41.216 [2024-07-25 12:09:27.262817] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:41.216 [2024-07-25 12:09:27.262824] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.216 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.475 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.475 "name": "raid_bdev1", 00:27:41.475 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:41.475 "strip_size_kb": 0, 00:27:41.475 "state": "online", 00:27:41.475 
"raid_level": "raid1", 00:27:41.475 "superblock": true, 00:27:41.475 "num_base_bdevs": 2, 00:27:41.475 "num_base_bdevs_discovered": 1, 00:27:41.475 "num_base_bdevs_operational": 1, 00:27:41.475 "base_bdevs_list": [ 00:27:41.475 { 00:27:41.475 "name": null, 00:27:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.475 "is_configured": false, 00:27:41.475 "data_offset": 256, 00:27:41.475 "data_size": 7936 00:27:41.475 }, 00:27:41.475 { 00:27:41.475 "name": "BaseBdev2", 00:27:41.475 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:41.475 "is_configured": true, 00:27:41.475 "data_offset": 256, 00:27:41.475 "data_size": 7936 00:27:41.475 } 00:27:41.475 ] 00:27:41.475 }' 00:27:41.475 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.475 12:09:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.041 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.299 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.299 "name": "raid_bdev1", 00:27:42.299 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:42.299 "strip_size_kb": 0, 00:27:42.299 "state": "online", 00:27:42.299 "raid_level": "raid1", 00:27:42.299 "superblock": true, 00:27:42.299 "num_base_bdevs": 2, 00:27:42.299 "num_base_bdevs_discovered": 1, 00:27:42.299 "num_base_bdevs_operational": 1, 00:27:42.299 "base_bdevs_list": [ 00:27:42.299 { 00:27:42.299 "name": null, 00:27:42.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.299 "is_configured": false, 00:27:42.299 "data_offset": 256, 00:27:42.299 "data_size": 7936 00:27:42.299 }, 00:27:42.299 { 00:27:42.299 "name": "BaseBdev2", 00:27:42.299 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:42.299 "is_configured": true, 00:27:42.299 "data_offset": 256, 00:27:42.299 "data_size": 7936 00:27:42.299 } 00:27:42.299 ] 00:27:42.299 }' 00:27:42.299 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.299 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:42.299 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.558 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:42.558 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:42.558 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:42.817 [2024-07-25 12:09:28.862584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:42.817 [2024-07-25 12:09:28.862631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:42.817 [2024-07-25 12:09:28.862648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13267d0 00:27:42.817 [2024-07-25 12:09:28.862660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:42.817 [2024-07-25 12:09:28.862806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:42.817 [2024-07-25 12:09:28.862821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:42.817 [2024-07-25 12:09:28.862860] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:42.817 [2024-07-25 12:09:28.862871] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:42.817 [2024-07-25 12:09:28.862880] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:42.817 BaseBdev1 00:27:42.817 12:09:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.820 12:09:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.079 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.079 "name": "raid_bdev1", 00:27:44.079 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:44.079 "strip_size_kb": 0, 00:27:44.079 "state": "online", 00:27:44.079 "raid_level": "raid1", 00:27:44.079 
"superblock": true, 00:27:44.079 "num_base_bdevs": 2, 00:27:44.079 "num_base_bdevs_discovered": 1, 00:27:44.079 "num_base_bdevs_operational": 1, 00:27:44.079 "base_bdevs_list": [ 00:27:44.079 { 00:27:44.079 "name": null, 00:27:44.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.079 "is_configured": false, 00:27:44.079 "data_offset": 256, 00:27:44.079 "data_size": 7936 00:27:44.079 }, 00:27:44.079 { 00:27:44.079 "name": "BaseBdev2", 00:27:44.079 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:44.079 "is_configured": true, 00:27:44.079 "data_offset": 256, 00:27:44.079 "data_size": 7936 00:27:44.079 } 00:27:44.079 ] 00:27:44.079 }' 00:27:44.079 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:44.079 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.644 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.901 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.901 "name": "raid_bdev1", 00:27:44.901 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:44.901 "strip_size_kb": 0, 00:27:44.901 "state": "online", 00:27:44.901 "raid_level": "raid1", 00:27:44.901 "superblock": true, 00:27:44.901 "num_base_bdevs": 2, 00:27:44.901 "num_base_bdevs_discovered": 1, 00:27:44.901 "num_base_bdevs_operational": 1, 00:27:44.901 "base_bdevs_list": [ 00:27:44.901 { 00:27:44.901 "name": null, 00:27:44.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.901 "is_configured": false, 00:27:44.901 "data_offset": 256, 00:27:44.901 "data_size": 7936 00:27:44.901 }, 00:27:44.901 { 00:27:44.901 "name": "BaseBdev2", 00:27:44.901 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:44.901 "is_configured": true, 00:27:44.901 "data_offset": 256, 00:27:44.901 "data_size": 7936 00:27:44.901 } 00:27:44.901 ] 00:27:44.901 }' 00:27:44.901 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.901 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:44.901 12:09:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:44.901 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:45.159 [2024-07-25 12:09:31.224828] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:45.159 [2024-07-25 12:09:31.224935] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:45.159 [2024-07-25 12:09:31.224949] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:45.159 request: 00:27:45.159 { 00:27:45.159 "base_bdev": "BaseBdev1", 00:27:45.159 "raid_bdev": "raid_bdev1", 00:27:45.159 "method": "bdev_raid_add_base_bdev", 00:27:45.159 "req_id": 1 00:27:45.159 } 00:27:45.159 Got JSON-RPC error response 00:27:45.159 response: 00:27:45.159 { 00:27:45.159 "code": -22, 00:27:45.159 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:45.159 } 00:27:45.159 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:27:45.159 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.159 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.159 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.159 12:09:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:46.536 12:09:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:46.536 "name": "raid_bdev1", 00:27:46.536 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:46.536 "strip_size_kb": 0, 00:27:46.536 "state": "online", 00:27:46.536 "raid_level": "raid1", 00:27:46.536 "superblock": true, 00:27:46.536 "num_base_bdevs": 2, 00:27:46.536 "num_base_bdevs_discovered": 1, 00:27:46.536 "num_base_bdevs_operational": 1, 00:27:46.536 "base_bdevs_list": [ 00:27:46.536 { 00:27:46.536 "name": null, 00:27:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.536 "is_configured": false, 00:27:46.536 "data_offset": 256, 00:27:46.536 "data_size": 7936 00:27:46.536 }, 00:27:46.536 { 00:27:46.536 "name": "BaseBdev2", 00:27:46.536 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:46.536 "is_configured": true, 00:27:46.536 "data_offset": 256, 00:27:46.536 "data_size": 7936 00:27:46.536 } 00:27:46.536 ] 00:27:46.536 }' 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:46.536 12:09:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.101 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.361 "name": "raid_bdev1", 00:27:47.361 "uuid": "8a064ecb-826f-4277-b85f-292832baefbe", 00:27:47.361 "strip_size_kb": 0, 00:27:47.361 "state": "online", 00:27:47.361 "raid_level": "raid1", 00:27:47.361 "superblock": true, 00:27:47.361 "num_base_bdevs": 2, 00:27:47.361 "num_base_bdevs_discovered": 1, 00:27:47.361 "num_base_bdevs_operational": 1, 00:27:47.361 "base_bdevs_list": [ 00:27:47.361 { 00:27:47.361 "name": null, 00:27:47.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.361 "is_configured": false, 00:27:47.361 "data_offset": 256, 00:27:47.361 "data_size": 7936 00:27:47.361 }, 00:27:47.361 { 00:27:47.361 "name": "BaseBdev2", 00:27:47.361 "uuid": "c017dc78-e07f-5a8e-a2e1-aaa24f2f4249", 00:27:47.361 "is_configured": true, 00:27:47.361 "data_offset": 256, 00:27:47.361 "data_size": 7936 00:27:47.361 } 00:27:47.361 ] 00:27:47.361 }' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 89192 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89192 ']' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89192 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89192 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89192' 00:27:47.361 killing process with pid 89192 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89192 00:27:47.361 Received shutdown signal, test time was about 60.000000 seconds 00:27:47.361 00:27:47.361 Latency(us) 00:27:47.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.361 =================================================================================================================== 00:27:47.361 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:47.361 [2024-07-25 12:09:33.440605] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.361 [2024-07-25 12:09:33.440682] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.361 [2024-07-25 12:09:33.440721] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.361 [2024-07-25 12:09:33.440732] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132ee10 name raid_bdev1, state offline 00:27:47.361 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89192 00:27:47.361 [2024-07-25 12:09:33.465514] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:47.621 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:27:47.621 00:27:47.621 real 0m27.750s 00:27:47.621 user 0m44.029s 00:27:47.621 sys 0m3.565s 00:27:47.621 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.621 12:09:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 ************************************ 00:27:47.621 END TEST raid_rebuild_test_sb_md_interleaved 00:27:47.621 ************************************ 00:27:47.621 12:09:33 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:27:47.621 12:09:33 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:27:47.621 12:09:33 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 89192 ']' 00:27:47.621 12:09:33 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 89192 00:27:47.880 12:09:33 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:27:47.880 00:27:47.880 real 17m27.954s 00:27:47.880 user 29m33.596s 00:27:47.880 sys 3m10.140s 00:27:47.880 12:09:33 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.880 12:09:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:47.880 ************************************ 00:27:47.880 END TEST bdev_raid 00:27:47.880 ************************************ 00:27:47.880 12:09:33 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:47.880 12:09:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:47.880 12:09:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.880 12:09:33 -- common/autotest_common.sh@10 -- # set +x 00:27:47.880 ************************************ 00:27:47.880 START TEST bdevperf_config 00:27:47.880 ************************************ 00:27:47.880 12:09:33 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:47.880 * Looking for test storage... 
00:27:47.880 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:47.880 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:47.880 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:47.880 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@19 -- # 
echo 00:27:47.880 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:47.880 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:47.880 12:09:33 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:51.171 12:09:36 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 12:09:34.044320] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:51.171 [2024-07-25 12:09:34.044398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94368 ] 00:27:51.171 Using job config with 4 jobs 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested 
device 0000:3d:02.5 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:51.171 [2024-07-25 12:09:34.194462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.171 [2024-07-25 12:09:34.295817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.171 cpumask for '\''job0'\'' is too big 00:27:51.171 cpumask for '\''job1'\'' is too big 00:27:51.171 cpumask for '\''job2'\'' is too big 00:27:51.171 cpumask for '\''job3'\'' is too big 00:27:51.171 Running I/O for 2 seconds... 
00:27:51.171 00:27:51.171 Latency(us) 00:27:51.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.171 Malloc0 : 2.01 25955.81 25.35 0.00 0.00 9852.79 1730.15 15099.49 00:27:51.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.171 Malloc0 : 2.02 25965.18 25.36 0.00 0.00 9828.27 1717.04 13369.34 00:27:51.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.171 Malloc0 : 2.02 25943.26 25.34 0.00 0.00 9816.13 1717.04 11639.19 00:27:51.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.171 Malloc0 : 2.02 25921.30 25.31 0.00 0.00 9804.36 1717.04 10223.62 00:27:51.171 =================================================================================================================== 00:27:51.171 Total : 103785.55 101.35 0.00 0.00 9825.35 1717.04 15099.49' 00:27:51.171 12:09:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 12:09:34.044320] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:51.171 [2024-07-25 12:09:34.044398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94368 ] 00:27:51.171 Using job config with 4 jobs 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:51.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.171 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:51.172 [2024-07-25 12:09:34.194462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.172 [2024-07-25 12:09:34.295817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.172 cpumask for '\''job0'\'' is too big 00:27:51.172 cpumask for '\''job1'\'' is too big 00:27:51.172 cpumask for '\''job2'\'' is too big 00:27:51.172 cpumask for '\''job3'\'' is too big 00:27:51.172 Running I/O for 2 seconds... 
00:27:51.172 00:27:51.172 Latency(us) 00:27:51.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.01 25955.81 25.35 0.00 0.00 9852.79 1730.15 15099.49 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.02 25965.18 25.36 0.00 0.00 9828.27 1717.04 13369.34 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.02 25943.26 25.34 0.00 0.00 9816.13 1717.04 11639.19 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.02 25921.30 25.31 0.00 0.00 9804.36 1717.04 10223.62 00:27:51.172 =================================================================================================================== 00:27:51.172 Total : 103785.55 101.35 0.00 0.00 9825.35 1717.04 15099.49' 00:27:51.172 12:09:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:09:34.044320] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:51.172 [2024-07-25 12:09:34.044398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94368 ] 00:27:51.172 Using job config with 4 jobs 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:27:51.172 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:51.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.172 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:51.172 [2024-07-25 12:09:34.194462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.172 [2024-07-25 12:09:34.295817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.172 cpumask for '\''job0'\'' is too big 00:27:51.172 cpumask for '\''job1'\'' is too big 00:27:51.172 cpumask for '\''job2'\'' is too big 00:27:51.172 cpumask for '\''job3'\'' is too big 00:27:51.172 Running I/O for 2 seconds... 
00:27:51.172 00:27:51.172 Latency(us) 00:27:51.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.01 25955.81 25.35 0.00 0.00 9852.79 1730.15 15099.49 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.172 Malloc0 : 2.02 25965.18 25.36 0.00 0.00 9828.27 1717.04 13369.34 00:27:51.172 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.173 Malloc0 : 2.02 25943.26 25.34 0.00 0.00 9816.13 1717.04 11639.19 00:27:51.173 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:51.173 Malloc0 : 2.02 25921.30 25.31 0.00 0.00 9804.36 1717.04 10223.62 00:27:51.173 =================================================================================================================== 00:27:51.173 Total : 103785.55 101.35 0.00 0.00 9825.35 1717.04 15099.49' 00:27:51.173 12:09:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:51.173 12:09:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:51.173 12:09:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:51.173 12:09:36 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:51.173 [2024-07-25 12:09:36.744998] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:51.173 [2024-07-25 12:09:36.745061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94690 ] 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.2 cannot be used 
00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:51.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.173 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:51.173 [2024-07-25 12:09:36.894437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.173 [2024-07-25 12:09:36.995373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.173 cpumask for 'job0' is too big 00:27:51.173 cpumask for 'job1' is too big 00:27:51.173 cpumask for 'job2' is too big 00:27:51.173 cpumask for 'job3' is too big 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:53.708 Running I/O for 2 seconds... 
00:27:53.708 00:27:53.708 Latency(us) 00:27:53.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.708 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:53.708 Malloc0 : 2.02 25754.46 25.15 0.00 0.00 9934.32 1703.94 15204.35 00:27:53.708 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:53.708 Malloc0 : 2.02 25732.57 25.13 0.00 0.00 9921.42 1690.83 13526.63 00:27:53.708 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:53.708 Malloc0 : 2.02 25710.77 25.11 0.00 0.00 9908.67 1690.83 11796.48 00:27:53.708 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:53.708 Malloc0 : 2.02 25689.09 25.09 0.00 0.00 9896.13 1690.83 10276.04 00:27:53.708 =================================================================================================================== 00:27:53.708 Total : 102886.89 100.48 0.00 0.00 9915.14 1690.83 15204.35' 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.708 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.708 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.708 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.708 12:09:39 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:56.241 12:09:42 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 12:09:39.466146] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:56.241 [2024-07-25 12:09:39.466211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95173 ] 00:27:56.241 Using job config with 3 jobs 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 
0000:3f:01.4 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:56.241 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.241 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:56.241 [2024-07-25 12:09:39.611265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.241 [2024-07-25 12:09:39.709878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.241 cpumask for '\''job0'\'' is too big 00:27:56.241 cpumask for '\''job1'\'' is too big 00:27:56.241 cpumask for '\''job2'\'' is too big 00:27:56.241 Running I/O for 2 seconds... 00:27:56.241 00:27:56.241 Latency(us) 00:27:56.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.241 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.241 Malloc0 : 2.01 34881.75 34.06 0.00 0.00 7327.98 1703.94 10852.76 00:27:56.241 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.242 Malloc0 : 2.01 34851.79 34.03 0.00 0.00 7318.64 1690.83 9122.61 00:27:56.242 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.242 Malloc0 : 2.02 34906.28 34.09 0.00 0.00 7292.41 865.08 7707.03 00:27:56.242 =================================================================================================================== 00:27:56.242 Total : 104639.82 102.19 0.00 0.00 7312.99 865.08 10852.76' 00:27:56.242 12:09:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 12:09:39.466146] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
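In the 3-job write table above, the MiB/s column is just IOPS scaled by the 1024-byte IO size reported in the job header (1024 B = 1/1024 MiB), which gives a quick sanity check of the units:

# 34881.75 IOPS at 1024 B per IO:
echo 'scale=2; 34881.75/1024' | bc    # 34.06, matching the first Malloc0 row
echo 'scale=2; 104639.82/1024' | bc   # 102.18; the table's Total of 102.19 rounds where bc truncates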
0000:3f:01.6 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:56.242 [2024-07-25 12:09:39.611265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.242 [2024-07-25 12:09:39.709878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.242 cpumask for '\''job0'\'' is too big 00:27:56.242 cpumask for '\''job1'\'' is too big 00:27:56.242 cpumask for '\''job2'\'' is too big 00:27:56.242 Running I/O for 2 seconds... 00:27:56.242 00:27:56.242 Latency(us) 00:27:56.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.242 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.242 Malloc0 : 2.01 34881.75 34.06 0.00 0.00 7327.98 1703.94 10852.76 00:27:56.242 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.242 Malloc0 : 2.01 34851.79 34.03 0.00 0.00 7318.64 1690.83 9122.61 00:27:56.242 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.242 Malloc0 : 2.02 34906.28 34.09 0.00 0.00 7292.41 865.08 7707.03 00:27:56.242 =================================================================================================================== 00:27:56.242 Total : 104639.82 102.19 0.00 0.00 7312.99 865.08 10852.76' 00:27:56.242 12:09:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:09:39.466146] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
0000:3f:01.6 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:56.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.242 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:56.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.243 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:56.243 [2024-07-25 12:09:39.611265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.243 [2024-07-25 12:09:39.709878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.243 cpumask for '\''job0'\'' is too big 00:27:56.243 cpumask for '\''job1'\'' is too big 00:27:56.243 cpumask for '\''job2'\'' is too big 00:27:56.243 Running I/O for 2 seconds... 00:27:56.243 00:27:56.243 Latency(us) 00:27:56.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.243 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.243 Malloc0 : 2.01 34881.75 34.06 0.00 0.00 7327.98 1703.94 10852.76 00:27:56.243 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.243 Malloc0 : 2.01 34851.79 34.03 0.00 0.00 7318.64 1690.83 9122.61 00:27:56.243 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:56.243 Malloc0 : 2.02 34906.28 34.09 0.00 0.00 7292.41 865.08 7707.03 00:27:56.243 =================================================================================================================== 00:27:56.243 Total : 104639.82 102.19 0.00 0.00 7312.99 865.08 10852.76' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:56.243 12:09:42 bdevperf_config -- 
bdevperf/common.sh@13 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:56.243 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:56.243 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:56.243 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:56.243 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:56.243 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:56.243 12:09:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:58.788 12:09:44 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 12:09:42.199962] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
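The create_job calls traced above append one INI-style section per call to test.conf, which is then passed to bdevperf with -j next to the --json bdev configuration. Below is a hypothetical reconstruction of what the file would hold after the [global] call and the four empty job sections; the key names and the exact layout are assumptions inferred from the traced local rw= and local filename= values, not the literal common.sh output:

# Hypothetical: test.conf after create_job global rw Malloc0:Malloc1 followed by
# create_job job0..job3 with no per-job rw or filename.
cat > test.conf <<'EOF'
[global]
filename=Malloc0:Malloc1
rw=rw

[job0]

[job1]

[job2]

[job3]
EOF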
00:27:58.788 [2024-07-25 12:09:42.200026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95705 ] 00:27:58.788 Using job config with 4 jobs 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 
0000:3f:01.6 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:58.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.788 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:58.788 [2024-07-25 12:09:42.352203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.788 [2024-07-25 12:09:42.456079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.788 cpumask for '\''job0'\'' is too big 00:27:58.788 cpumask for '\''job1'\'' is too big 00:27:58.788 cpumask for '\''job2'\'' is too big 00:27:58.788 cpumask for '\''job3'\'' is too big 00:27:58.788 Running I/O for 2 seconds... 00:27:58.788 00:27:58.788 Latency(us) 00:27:58.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.788 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc0 : 2.04 12831.21 12.53 0.00 0.00 19936.89 3512.73 30618.42 00:27:58.788 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc1 : 2.04 12820.11 12.52 0.00 0.00 19938.32 4299.16 30618.42 00:27:58.788 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc0 : 2.04 12809.33 12.51 0.00 0.00 19888.86 3512.73 27053.26 00:27:58.788 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc1 : 2.04 12798.23 12.50 0.00 0.00 19888.62 4299.16 27053.26 00:27:58.788 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc0 : 2.04 12787.49 12.49 0.00 0.00 19840.06 3460.30 23592.96 00:27:58.788 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc1 : 2.04 12776.52 12.48 0.00 0.00 19839.37 4299.16 23592.96 00:27:58.788 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc0 : 2.05 12765.71 12.47 0.00 0.00 19789.65 3486.52 20342.37 00:27:58.788 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.788 Malloc1 : 2.05 12754.80 12.46 0.00 0.00 19790.15 4272.95 20342.37 00:27:58.788 =================================================================================================================== 00:27:58.788 Total : 102343.41 99.94 0.00 0.00 19863.99 3460.30 30618.42' 00:27:58.788 12:09:44 bdevperf_config -- 
bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 12:09:42.199962] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:27:58.789 00:27:58.789 Latency(us) 00:27:58.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12831.21 12.53 0.00 0.00 19936.89 3512.73 30618.42 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12820.11 12.52 0.00 0.00 19938.32 4299.16 30618.42 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12809.33 12.51 0.00 0.00 19888.86 3512.73 27053.26 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12798.23 12.50 0.00 0.00 19888.62 4299.16 27053.26 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12787.49 12.49 0.00 0.00 19840.06 3460.30 23592.96 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12776.52 12.48 0.00 0.00 19839.37 4299.16 23592.96 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.05 12765.71 12.47 0.00 0.00 19789.65 3486.52 20342.37 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.05 12754.80 12.46 0.00 0.00 19790.15 4272.95 20342.37 00:27:58.789 =================================================================================================================== 00:27:58.789 Total : 102343.41 99.94 0.00 0.00 19863.99 3460.30 30618.42' 00:27:58.789 12:09:44 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:09:42.199962] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
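Stripped of the captured output being echoed back through the parser, the check performed for this final 4-job rw run reduces to a short capture, parse, assert, cleanup sequence. A condensed sketch follows; $bdevperf, $conf_json and $test_conf are placeholders for the full /var/jenkins/... paths spelled out in the trace, and get_num_jobs is the helper sketched earlier:

# Run bdevperf for 2 seconds against the generated job file, then verify that
# it reports the expected number of jobs before removing test.conf.
bdevperf_output=$("$bdevperf" -t 2 --json "$conf_json" -j "$test_conf")
[[ $(get_num_jobs "$bdevperf_output") == 4 ]]
cleanup   # per the trace, cleanup just rm -f's the generated test.conf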
12:09:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:27:58.789 00:27:58.789 Latency(us) 00:27:58.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12831.21 12.53 0.00 0.00 19936.89 3512.73 30618.42 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12820.11 12.52 0.00 0.00 19938.32 4299.16 30618.42 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12809.33 12.51 0.00 0.00 19888.86 3512.73 27053.26 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12798.23 12.50 0.00 0.00 19888.62 4299.16 27053.26 00:27:58.789 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc0 : 2.04 12787.49 12.49 0.00 0.00 19840.06 3460.30 23592.96 00:27:58.789 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.789 Malloc1 : 2.04 12776.52 12.48 0.00 0.00 19839.37 4299.16 23592.96 00:27:58.790 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.790 Malloc0 : 2.05 12765.71 12.47 0.00 0.00 19789.65 3486.52 20342.37 00:27:58.790 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:58.790 Malloc1 : 2.05 12754.80 12.46 0.00 0.00 19790.15 4272.95 20342.37 00:27:58.790 =================================================================================================================== 00:27:58.790 Total : 102343.41 99.94 0.00 0.00 19863.99 3460.30 30618.42' 00:27:58.790 12:09:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:58.790 12:09:44 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:58.790 12:09:44 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:27:58.790 12:09:44 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:58.790 12:09:44 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:58.790 00:27:58.790 real 0m11.055s 00:27:58.790 user 0m9.751s 00:27:58.790 sys 0m1.142s 00:27:58.790 12:09:44 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.790 12:09:44 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:27:58.790 ************************************ 00:27:58.790 END TEST bdevperf_config 00:27:58.790 ************************************ 00:27:59.050 12:09:44 -- spdk/autotest.sh@196 -- # uname -s 00:27:59.050 12:09:44 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:27:59.050 12:09:44 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:59.050 12:09:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.050 12:09:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.050 12:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.050 ************************************ 00:27:59.050 START TEST reactor_set_interrupt 00:27:59.050 ************************************ 00:27:59.050 12:09:44 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:59.050 * Looking for test storage... 
00:27:59.050 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:59.050 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:27:59.050 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:27:59.050 12:09:45 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:59.050 12:09:45 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:27:59.050 12:09:45 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:27:59.050 12:09:45 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:59.051 
12:09:45 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:59.051 12:09:45 
reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:27:59.051 12:09:45 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:27:59.051 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:59.051 12:09:45 reactor_set_interrupt -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:27:59.051 12:09:45 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:59.051 #define SPDK_CONFIG_H 00:27:59.051 #define SPDK_CONFIG_APPS 1 00:27:59.051 #define SPDK_CONFIG_ARCH native 00:27:59.051 #undef SPDK_CONFIG_ASAN 00:27:59.051 #undef SPDK_CONFIG_AVAHI 00:27:59.051 #undef SPDK_CONFIG_CET 00:27:59.051 #define SPDK_CONFIG_COVERAGE 1 00:27:59.051 #define SPDK_CONFIG_CROSS_PREFIX 00:27:59.051 #define SPDK_CONFIG_CRYPTO 1 00:27:59.051 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:27:59.051 #undef SPDK_CONFIG_CUSTOMOCF 00:27:59.051 #undef SPDK_CONFIG_DAOS 00:27:59.051 #define SPDK_CONFIG_DAOS_DIR 00:27:59.051 #define SPDK_CONFIG_DEBUG 1 00:27:59.051 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:27:59.051 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:59.051 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:59.051 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:59.051 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:59.051 #undef SPDK_CONFIG_DPDK_UADK 00:27:59.051 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:59.051 #define SPDK_CONFIG_EXAMPLES 1 00:27:59.051 #undef SPDK_CONFIG_FC 00:27:59.051 #define SPDK_CONFIG_FC_PATH 00:27:59.051 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:59.051 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:59.051 #undef SPDK_CONFIG_FUSE 00:27:59.051 #undef SPDK_CONFIG_FUZZER 00:27:59.051 #define SPDK_CONFIG_FUZZER_LIB 00:27:59.051 #undef SPDK_CONFIG_GOLANG 00:27:59.051 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:27:59.051 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:27:59.051 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:59.051 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:27:59.051 #undef 
SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:59.051 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:59.051 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:59.051 #define SPDK_CONFIG_IDXD 1 00:27:59.051 #define SPDK_CONFIG_IDXD_KERNEL 1 00:27:59.051 #define SPDK_CONFIG_IPSEC_MB 1 00:27:59.052 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:59.052 #define SPDK_CONFIG_ISAL 1 00:27:59.052 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:59.052 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:59.052 #define SPDK_CONFIG_LIBDIR 00:27:59.052 #undef SPDK_CONFIG_LTO 00:27:59.052 #define SPDK_CONFIG_MAX_LCORES 128 00:27:59.052 #define SPDK_CONFIG_NVME_CUSE 1 00:27:59.052 #undef SPDK_CONFIG_OCF 00:27:59.052 #define SPDK_CONFIG_OCF_PATH 00:27:59.052 #define SPDK_CONFIG_OPENSSL_PATH 00:27:59.052 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:59.052 #define SPDK_CONFIG_PGO_DIR 00:27:59.052 #undef SPDK_CONFIG_PGO_USE 00:27:59.052 #define SPDK_CONFIG_PREFIX /usr/local 00:27:59.052 #undef SPDK_CONFIG_RAID5F 00:27:59.052 #undef SPDK_CONFIG_RBD 00:27:59.052 #define SPDK_CONFIG_RDMA 1 00:27:59.052 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:59.052 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:59.052 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:59.052 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:59.052 #define SPDK_CONFIG_SHARED 1 00:27:59.052 #undef SPDK_CONFIG_SMA 00:27:59.052 #define SPDK_CONFIG_TESTS 1 00:27:59.052 #undef SPDK_CONFIG_TSAN 00:27:59.052 #define SPDK_CONFIG_UBLK 1 00:27:59.052 #define SPDK_CONFIG_UBSAN 1 00:27:59.052 #undef SPDK_CONFIG_UNIT_TESTS 00:27:59.052 #undef SPDK_CONFIG_URING 00:27:59.052 #define SPDK_CONFIG_URING_PATH 00:27:59.052 #undef SPDK_CONFIG_URING_ZNS 00:27:59.052 #undef SPDK_CONFIG_USDT 00:27:59.052 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:27:59.052 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:27:59.052 #undef SPDK_CONFIG_VFIO_USER 00:27:59.052 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:59.052 #define SPDK_CONFIG_VHOST 1 00:27:59.052 #define SPDK_CONFIG_VIRTIO 1 00:27:59.052 #undef SPDK_CONFIG_VTUNE 00:27:59.052 #define SPDK_CONFIG_VTUNE_DIR 00:27:59.052 #define SPDK_CONFIG_WERROR 1 00:27:59.052 #define SPDK_CONFIG_WPDK_DIR 00:27:59.052 #undef SPDK_CONFIG_XNVME 00:27:59.052 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:59.052 12:09:45 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:59.052 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:27:59.052 12:09:45 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.052 12:09:45 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.052 12:09:45 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.052 12:09:45 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.052 12:09:45 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.052 12:09:45 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.052 12:09:45 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:27:59.052 12:09:45 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.052 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:59.052 12:09:45 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:59.052 12:09:45 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:59.052 12:09:45 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:59.052 12:09:45 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:27:59.313 12:09:45 reactor_set_interrupt -- 
pm/common@76 -- # SUDO[1]='sudo -E' 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:27:59.313 12:09:45 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:27:59.314 12:09:45 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:27:59.314 12:09:45 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:27:59.314 12:09:45 reactor_set_interrupt -- 
common/autotest_common.sh@124 -- # : 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- 
common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:27:59.314 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@242 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:27:59.315 
12:09:45 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 96256 ]] 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 96256 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.EwyzCt 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.EwyzCt/tests/interrupt /tmp/spdk.EwyzCt 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # 
uses["$mount"]=4330127360 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:27:59.315 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=55101546496 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6640758784 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338663424 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30870003712 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1150976 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test 
storage...\n' 00:27:59.316 * Looking for test storage... 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=55101546496 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=8855351296 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.316 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=96314 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 96314 /var/tmp/spdk.sock 00:27:59.316 12:09:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 96314 ']' 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
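The waitforlisten call traced above is essentially a bounded poll against the target's RPC socket. A minimal sketch of that pattern in bash, where wait_for_rpc_socket is a hypothetical name and rpc_get_methods is used only as an illustrative liveness probe; this is not the actual autotest_common.sh implementation:

    # Sketch only: wait until the launched target ($1) is alive and its RPC
    # socket ($2) answers, mirroring what the waitforlisten call above does.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        local rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            if [[ -S "$sock" ]] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0                             # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1                                     # gave up waiting
    }
    # usage: wait_for_rpc_socket "$intr_tgt_pid" /var/tmp/spdk.sock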
00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.316 12:09:45 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:59.316 [2024-07-25 12:09:45.316064] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:27:59.316 [2024-07-25 12:09:45.316123] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96314 ] 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.316 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:59.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 
0000:3f:01.4 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:59.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.317 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:59.576 [2024-07-25 12:09:45.449711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.576 [2024-07-25 12:09:45.537076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.576 [2024-07-25 12:09:45.537174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.576 [2024-07-25 12:09:45.537179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.576 [2024-07-25 12:09:45.605059] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
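The per-reactor checks that follow resolve spdk_thread ids by cpumask, using thread_get_stats piped through jq (the jq filter appears verbatim in the trace below). A condensed sketch of that lookup, with get_thread_ids_for_mask as a hypothetical name; the traced logic lives in test/interrupt/common.sh:

    # Sketch only: list the ids of spdk threads whose cpumask matches a reactor mask.
    get_thread_ids_for_mask() {
        local rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
        local mask=${1#0x}    # the trace compares against "1"/"4", not "0x1"/"0x4"
        "$rpc" thread_get_stats |
            jq --arg reactor_cpumask "$mask" \
               '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }
    # usage: thd0_ids=($(get_thread_ids_for_mask 0x1))   # yields "1" for reactor_0 below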
00:28:00.145 12:09:46 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.145 12:09:46 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:28:00.145 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:28:00.145 12:09:46 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:00.426 Malloc0 00:28:00.426 Malloc1 00:28:00.426 Malloc2 00:28:00.426 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:28:00.426 12:09:46 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:28:00.426 12:09:46 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:00.426 12:09:46 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:00.715 5000+0 records in 00:28:00.715 5000+0 records out 00:28:00.715 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0245616 s, 417 MB/s 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:00.715 AIO0 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 96314 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 96314 without_thd 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=96314 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:00.715 12:09:46 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@59 -- # 
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:00.975 12:09:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:01.235 spdk_thread ids are 1 on reactor0. 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 96314 0 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 96314 0 idle 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:01.235 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96314 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0' 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96314 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 96314 1 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 96314 1 idle 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:01.494 12:09:47 
reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.494 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:01.495 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:28:01.754 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96317 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:28:01.754 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96317 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 96314 2 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 96314 2 idle 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96318 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96318 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
cpu_rate=0.0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:28:01.755 12:09:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:28:02.015 [2024-07-25 12:09:48.038101] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:02.015 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:02.274 [2024-07-25 12:09:48.277838] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:02.274 [2024-07-25 12:09:48.278156] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:02.274 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:02.534 [2024-07-25 12:09:48.497740] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:28:02.534 [2024-07-25 12:09:48.497908] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 96314 0 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 96314 0 busy 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:02.534 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96314 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96314 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 96314 2 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 96314 2 busy 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96318 
root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.35 reactor_2' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96318 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.35 reactor_2 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:02.793 12:09:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:03.053 [2024-07-25 12:09:49.085741] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:28:03.053 [2024-07-25 12:09:49.085839] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 96314 2 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 96314 2 idle 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:03.053 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96318 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2' 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96318 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:03.312 
12:09:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:03.312 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:03.571 [2024-07-25 12:09:49.489741] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:03.571 [2024-07-25 12:09:49.489863] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:03.571 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:28:03.571 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:28:03.571 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:28:03.830 [2024-07-25 12:09:49.718077] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 96314 0 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 96314 0 idle 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=96314 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:03.830 12:09:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 96314 -w 256 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 96314 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.58 reactor_0' 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 96314 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.58 reactor_0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:03.831 12:09:49 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:28:03.831 12:09:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 96314 00:28:03.831 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 96314 ']' 00:28:03.831 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 96314 00:28:03.831 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:28:03.831 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.831 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96314 00:28:04.090 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.090 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.090 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96314' 00:28:04.090 killing process with pid 96314 00:28:04.091 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 96314 00:28:04.091 12:09:49 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 96314 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=97185 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:04.091 12:09:50 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 97185 /var/tmp/spdk.sock 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 97185 ']' 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.091 12:09:50 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:04.350 [2024-07-25 12:09:50.225574] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
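At this point the without-threads pass is complete: the first target (pid 96314) has been killed, its AIO backing file removed, and start_intr_tgt relaunches interrupt_tgt with the same arguments (-m 0x07 -r /var/tmp/spdk.sock -E -g) for the with-threads pass, recorded as pid 97185; waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A minimal stand-in for that launch-and-wait step is sketched below; the rpc_get_methods probe, the 60-iteration cap and the SPDK_DIR variable are illustrative simplifications, not the exact helper used by the suite:

    # rough sketch of start_intr_tgt + waitforlisten (assumes an SPDK build tree in $SPDK_DIR)
    "$SPDK_DIR/build/examples/interrupt_tgt" -m 0x07 -r /var/tmp/spdk.sock -E -g &
    intr_tgt_pid=$!
    for _ in $(seq 1 60); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done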
00:28:04.350 [2024-07-25 12:09:50.225638] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97185 ] 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:04.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.350 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.6 cannot be used 
00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:04.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.351 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:04.351 [2024-07-25 12:09:50.359694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.351 [2024-07-25 12:09:50.447280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.351 [2024-07-25 12:09:50.447371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.351 [2024-07-25 12:09:50.447377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.610 [2024-07-25 12:09:50.516161] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
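The second target is now up in the same state as the first (three reactors, app_thread in interrupt mode), and the sequence shown above is repeated, this time with without_thd left empty so the scheduler threads stay in place. In both passes a reactor is judged busy or idle by sampling top once in batch/threads mode and reading the %CPU column of its reactor_N row, exactly as the grep/awk/sed chain in the trace does; a simplified version of that probe is sketched below (the 70 and 30 thresholds mirror the checks in the trace, and the pid/reactor values are only an example):

    # simplified reactor busy/idle probe, modelled on the trace above
    pid=97185; reactor=reactor_0
    cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "$reactor" | awk '{print $9}')
    cpu=${cpu%.*}                      # "99.9" -> 99, "0.0" -> 0, as in the test
    if   [ "${cpu:-0}" -ge 70 ]; then echo "$reactor is busy"
    elif [ "${cpu:-0}" -le 30 ]; then echo "$reactor is idle"
    else echo "$reactor is neither clearly busy nor idle"; fi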
00:28:05.179 12:09:51 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.179 12:09:51 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:28:05.179 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:28:05.179 12:09:51 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:05.439 Malloc0 00:28:05.439 Malloc1 00:28:05.439 Malloc2 00:28:05.439 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:28:05.439 12:09:51 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:28:05.439 12:09:51 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:05.439 12:09:51 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:05.439 5000+0 records in 00:28:05.439 5000+0 records out 00:28:05.439 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0267864 s, 382 MB/s 00:28:05.439 12:09:51 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:05.698 AIO0 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 97185 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 97185 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=97185 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:05.698 12:09:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:05.958 12:09:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:06.217 spdk_thread ids are 1 on reactor0. 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97185 0 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 97185 0 idle 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:06.217 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97185 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97185 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97185 1 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 97185 1 idle 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:06.477 12:09:52 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97190 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97190 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97185 2 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 97185 2 idle 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:06.477 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97191 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97191 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:06.736 
12:09:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:28:06.736 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:06.995 [2024-07-25 12:09:52.867918] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:06.995 [2024-07-25 12:09:52.868111] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:28:06.995 [2024-07-25 12:09:52.868202] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:06.995 12:09:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:06.995 [2024-07-25 12:09:53.100422] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:28:06.995 [2024-07-25 12:09:53.100595] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97185 0 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 97185 0 busy 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97185 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.79 reactor_0' 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97185 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.79 reactor_0 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:07.254 12:09:53 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97185 2 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 97185 2 busy 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:07.254 12:09:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:07.255 12:09:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:07.255 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:07.255 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:07.255 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:07.255 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97191 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2' 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97191 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:07.513 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:07.772 [2024-07-25 12:09:53.690305] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:28:07.772 [2024-07-25 12:09:53.690409] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 97185 2 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 97185 2 idle 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97191 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2' 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97191 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:07.772 12:09:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:08.031 [2024-07-25 12:09:54.099350] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:08.031 [2024-07-25 12:09:54.099514] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:28:08.031 [2024-07-25 12:09:54.099540] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 97185 0 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 97185 0 idle 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=97185 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:08.031 12:09:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 97185 -w 256 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 97185 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.60 reactor_0' 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 97185 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.60 reactor_0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:08.291 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 97185 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 97185 ']' 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 97185 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97185 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
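The notices above capture the difference from the without-threads pass: with the scheduler threads in place, disabling interrupts on reactor 0 also moved app_thread to poll mode, and re-enabling them moved it back to interrupt mode. Stripped of the xtrace output, the mode toggling exercised in each pass comes down to four RPC calls (shown with a repository-relative rpc.py path instead of the full workspace path):

    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> back to interrupt
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> back to interrupt

The target is then torn down: killprocess confirms that pid 97185 is still the reactor_0 process (and not a sudo wrapper) before sending the kill and waiting for it to exit, which is what the echo/kill/wait lines that follow show.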
00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97185' 00:28:08.291 killing process with pid 97185 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 97185 00:28:08.291 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 97185 00:28:08.550 12:09:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:28:08.550 12:09:54 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:08.550 00:28:08.550 real 0m9.588s 00:28:08.550 user 0m8.910s 00:28:08.550 sys 0m2.056s 00:28:08.550 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.550 12:09:54 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 ************************************ 00:28:08.550 END TEST reactor_set_interrupt 00:28:08.550 ************************************ 00:28:08.550 12:09:54 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:08.550 12:09:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:08.551 12:09:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:08.551 12:09:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.551 ************************************ 00:28:08.551 START TEST reap_unregistered_poller 00:28:08.551 ************************************ 00:28:08.551 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:08.812 * Looking for test storage... 00:28:08.812 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:08.812 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:28:08.812 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:08.812 12:09:54 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@57 -- # 
CONFIG_HAVE_LIBBSD=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:28:08.813 12:09:54 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:28:08.813 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:28:08.813 12:09:54 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:28:08.813 #define SPDK_CONFIG_H 00:28:08.813 #define SPDK_CONFIG_APPS 1 00:28:08.813 #define SPDK_CONFIG_ARCH native 00:28:08.813 #undef SPDK_CONFIG_ASAN 00:28:08.813 #undef SPDK_CONFIG_AVAHI 00:28:08.813 #undef SPDK_CONFIG_CET 00:28:08.813 #define SPDK_CONFIG_COVERAGE 1 00:28:08.813 #define SPDK_CONFIG_CROSS_PREFIX 00:28:08.813 #define SPDK_CONFIG_CRYPTO 1 00:28:08.813 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:28:08.813 #undef SPDK_CONFIG_CUSTOMOCF 00:28:08.813 #undef SPDK_CONFIG_DAOS 00:28:08.813 #define SPDK_CONFIG_DAOS_DIR 00:28:08.813 #define SPDK_CONFIG_DEBUG 1 00:28:08.813 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:28:08.813 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:08.813 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:08.814 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:08.814 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:08.814 #undef SPDK_CONFIG_DPDK_UADK 00:28:08.814 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:08.814 #define SPDK_CONFIG_EXAMPLES 1 00:28:08.814 #undef SPDK_CONFIG_FC 00:28:08.814 #define SPDK_CONFIG_FC_PATH 00:28:08.814 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:08.814 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:08.814 #undef SPDK_CONFIG_FUSE 00:28:08.814 #undef SPDK_CONFIG_FUZZER 00:28:08.814 #define SPDK_CONFIG_FUZZER_LIB 00:28:08.814 #undef SPDK_CONFIG_GOLANG 00:28:08.814 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:28:08.814 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:28:08.814 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:08.814 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:28:08.814 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:08.814 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:08.814 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:08.814 #define SPDK_CONFIG_IDXD 1 00:28:08.814 #define SPDK_CONFIG_IDXD_KERNEL 1 00:28:08.814 #define SPDK_CONFIG_IPSEC_MB 1 00:28:08.814 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:08.814 #define SPDK_CONFIG_ISAL 1 00:28:08.814 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:08.814 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:08.814 #define SPDK_CONFIG_LIBDIR 00:28:08.814 #undef SPDK_CONFIG_LTO 
00:28:08.814 #define SPDK_CONFIG_MAX_LCORES 128 00:28:08.814 #define SPDK_CONFIG_NVME_CUSE 1 00:28:08.814 #undef SPDK_CONFIG_OCF 00:28:08.814 #define SPDK_CONFIG_OCF_PATH 00:28:08.814 #define SPDK_CONFIG_OPENSSL_PATH 00:28:08.814 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:08.814 #define SPDK_CONFIG_PGO_DIR 00:28:08.814 #undef SPDK_CONFIG_PGO_USE 00:28:08.814 #define SPDK_CONFIG_PREFIX /usr/local 00:28:08.814 #undef SPDK_CONFIG_RAID5F 00:28:08.814 #undef SPDK_CONFIG_RBD 00:28:08.814 #define SPDK_CONFIG_RDMA 1 00:28:08.814 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:08.814 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:08.814 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:08.814 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:08.814 #define SPDK_CONFIG_SHARED 1 00:28:08.814 #undef SPDK_CONFIG_SMA 00:28:08.814 #define SPDK_CONFIG_TESTS 1 00:28:08.814 #undef SPDK_CONFIG_TSAN 00:28:08.814 #define SPDK_CONFIG_UBLK 1 00:28:08.814 #define SPDK_CONFIG_UBSAN 1 00:28:08.814 #undef SPDK_CONFIG_UNIT_TESTS 00:28:08.814 #undef SPDK_CONFIG_URING 00:28:08.814 #define SPDK_CONFIG_URING_PATH 00:28:08.814 #undef SPDK_CONFIG_URING_ZNS 00:28:08.814 #undef SPDK_CONFIG_USDT 00:28:08.814 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:28:08.814 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:28:08.814 #undef SPDK_CONFIG_VFIO_USER 00:28:08.814 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:08.814 #define SPDK_CONFIG_VHOST 1 00:28:08.814 #define SPDK_CONFIG_VIRTIO 1 00:28:08.814 #undef SPDK_CONFIG_VTUNE 00:28:08.814 #define SPDK_CONFIG_VTUNE_DIR 00:28:08.814 #define SPDK_CONFIG_WERROR 1 00:28:08.814 #define SPDK_CONFIG_WPDK_DIR 00:28:08.814 #undef SPDK_CONFIG_XNVME 00:28:08.814 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:08.814 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:08.814 12:09:54 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.814 12:09:54 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.814 12:09:54 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.814 12:09:54 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.814 12:09:54 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:28:08.814 12:09:54 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.814 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:28:08.814 12:09:54 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:28:08.814 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:28:08.814 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 
00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:08.815 12:09:54 reap_unregistered_poller -- 
common/autotest_common.sh@126 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:28:08.815 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:28:08.816 12:09:54 
reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:08.816 12:09:54 reap_unregistered_poller -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export 
HUGEMEM=4096 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 98076 ]] 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 98076 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:28:08.816 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.6iuOvj 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.6iuOvj/tests/interrupt /tmp/spdk.6iuOvj 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # 
avails["$mount"]=954302464 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=55101358080 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6640947200 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=12338663424 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30870003712 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1150976 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:28:08.817 12:09:54 
reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:28:08.817 * Looking for test storage... 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=55101358080 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=8855539712 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.817 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:08.817 12:09:54 
reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:28:08.817 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=98123 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 98123 /var/tmp/spdk.sock 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 98123 ']' 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.818 12:09:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:08.818 12:09:54 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:09.078 [2024-07-25 12:09:54.930473] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:09.078 [2024-07-25 12:09:54.930536] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98123 ] 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:09.078 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:09.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.078 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:09.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.079 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:09.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.079 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:09.079 [2024-07-25 12:09:55.060273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:09.079 [2024-07-25 12:09:55.149617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.079 [2024-07-25 12:09:55.149711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.079 [2024-07-25 12:09:55.149714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.338 [2024-07-25 12:09:55.218615] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
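The trace above shows interrupt_common.sh launching build/examples/interrupt_tgt with cpu mask 0x07 on /var/tmp/spdk.sock, installing a cleanup trap, and then blocking in waitforlisten until the RPC socket answers. A minimal bash sketch of that start-up pattern follows; it is an approximation only — the probe loop below uses an rpc_get_methods call as a stand-in for the real waitforlisten helper, whose internals are not captured in this log, and the timeout values are assumed.

# Sketch: start SPDK's interrupt_tgt example and wait for its RPC socket,
# mirroring the start_intr_tgt flow traced above (paths/timeouts assumed).
SPDK_ROOT=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock
CPU_MASK=0x07

"$SPDK_ROOT/build/examples/interrupt_tgt" -m "$CPU_MASK" -r "$RPC_SOCK" -E -g &
intr_tgt_pid=$!
# Kill the target and remove the test AIO file on any exit, as the trap in
# interrupt_common.sh does.
trap 'kill "$intr_tgt_pid"; rm -f "$SPDK_ROOT/test/interrupt/aiofile"' SIGINT SIGTERM EXIT

# Poll until the UNIX-domain RPC socket responds (stand-in for waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done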
00:28:09.906 12:09:55 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.906 12:09:55 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:28:09.906 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:28:09.906 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:28:09.906 12:09:55 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.906 12:09:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:09.906 12:09:55 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.906 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:28:09.906 "name": "app_thread", 00:28:09.906 "id": 1, 00:28:09.906 "active_pollers": [], 00:28:09.906 "timed_pollers": [ 00:28:09.906 { 00:28:09.906 "name": "rpc_subsystem_poll_servers", 00:28:09.906 "id": 1, 00:28:09.906 "state": "waiting", 00:28:09.906 "run_count": 0, 00:28:09.906 "busy_count": 0, 00:28:09.906 "period_ticks": 10000000 00:28:09.906 } 00:28:09.906 ], 00:28:09.906 "paused_pollers": [] 00:28:09.906 }' 00:28:09.906 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:09.907 5000+0 records in 00:28:09.907 5000+0 records out 00:28:09.907 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0259244 s, 395 MB/s 00:28:09.907 12:09:55 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:10.166 AIO0 00:28:10.166 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:10.425 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:28:10.425 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:28:10.425 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:28:10.425 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.425 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:10.684 12:09:56 reap_unregistered_poller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.684 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:28:10.684 "name": "app_thread", 00:28:10.684 "id": 1, 00:28:10.684 "active_pollers": [], 00:28:10.684 "timed_pollers": [ 00:28:10.684 { 00:28:10.684 "name": "rpc_subsystem_poll_servers", 00:28:10.684 "id": 1, 00:28:10.684 "state": "waiting", 00:28:10.684 "run_count": 0, 00:28:10.684 "busy_count": 0, 00:28:10.684 "period_ticks": 10000000 00:28:10.684 } 00:28:10.684 ], 00:28:10.684 "paused_pollers": [] 00:28:10.684 }' 00:28:10.684 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:10.685 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 98123 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 98123 ']' 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 98123 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98123 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98123' 00:28:10.685 killing process with pid 98123 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 98123 00:28:10.685 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 98123 00:28:10.943 12:09:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:28:10.943 12:09:56 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:10.943 00:28:10.943 real 0m2.277s 00:28:10.943 user 0m1.378s 00:28:10.943 sys 0m0.621s 00:28:10.943 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.943 12:09:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:10.943 ************************************ 00:28:10.943 END TEST reap_unregistered_poller 00:28:10.943 ************************************ 00:28:10.943 12:09:56 -- spdk/autotest.sh@202 -- # uname -s 00:28:10.943 12:09:56 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 
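The poller accounting exercised by the test above can be inspected by hand with the same RPCs and jq filters that appear in the trace; a rough sketch follows (the /tmp/aiofile path is illustrative, the CI run used the aiofile under spdk/test/interrupt):

    # Assumes an SPDK app is already serving RPCs on /var/tmp/spdk.sock.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    app_thread=$("$rpc" thread_get_pollers | jq -r '.threads[0]')
    jq -r '.active_pollers[].name' <<< "$app_thread"   # empty at this point in the trace
    jq -r '.timed_pollers[].name'  <<< "$app_thread"   # rpc_subsystem_poll_servers
    # Registering an AIO bdev, as setup_bdev_aio does, adds its pollers on top of these.
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    "$rpc" bdev_aio_create /tmp/aiofile AIO0 2048
    "$rpc" bdev_wait_for_examine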
00:28:10.943 12:09:56 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:28:10.943 12:09:56 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:28:10.943 12:09:56 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:28:10.943 12:09:56 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:56 -- spdk/autotest.sh@264 -- # timing_exit lib 00:28:10.944 12:09:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.944 12:09:56 -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 12:09:57 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:28:10.944 12:09:57 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:10.944 12:09:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.944 12:09:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.944 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:28:11.203 ************************************ 00:28:11.203 START TEST compress_compdev 00:28:11.203 ************************************ 00:28:11.203 12:09:57 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:11.203 * Looking for test storage... 
00:28:11.203 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:11.203 12:09:57 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.203 12:09:57 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.203 12:09:57 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.203 12:09:57 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.203 12:09:57 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.203 12:09:57 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.203 12:09:57 compress_compdev -- paths/export.sh@5 -- # export PATH 00:28:11.203 12:09:57 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.203 12:09:57 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=98489 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:11.203 12:09:57 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 98489 00:28:11.203 12:09:57 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 98489 ']' 00:28:11.203 12:09:57 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.204 12:09:57 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.204 12:09:57 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
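The bdevperf invocation that follows in the trace boils down to the sketch below; flags are copied from the xtrace, and -z makes the app start idle and wait to be configured over the RPC socket, which is why the harness waits on /var/tmp/spdk.sock here:

    # Sketch: start bdevperf idle (-z) with the QAT compressdev DPDK config shipped with the test.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/examples/bdevperf" -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 \
        -c "$SPDK_DIR/test/compress/dpdk.json" &
    bdevperf_pid=$!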
00:28:11.204 12:09:57 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.204 12:09:57 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:11.204 12:09:57 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:11.204 [2024-07-25 12:09:57.264181] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:11.204 [2024-07-25 12:09:57.264241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98489 ] 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:11.463 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:11.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.463 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:11.463 [2024-07-25 12:09:57.385914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:11.463 [2024-07-25 12:09:57.473184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.463 [2024-07-25 12:09:57.473190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.030 [2024-07-25 12:09:58.148036] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:12.289 12:09:58 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.289 12:09:58 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:12.289 12:09:58 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:28:12.289 12:09:58 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:12.289 12:09:58 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:15.632 [2024-07-25 12:10:01.295701] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1291f00 PMD being used: compress_qat 00:28:15.632 12:10:01 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:15.632 12:10:01 
compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:15.632 12:10:01 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:15.890 [ 00:28:15.890 { 00:28:15.890 "name": "Nvme0n1", 00:28:15.890 "aliases": [ 00:28:15.890 "be52d514-6733-4a2c-b4d3-efcbc0e69ac4" 00:28:15.890 ], 00:28:15.890 "product_name": "NVMe disk", 00:28:15.890 "block_size": 512, 00:28:15.890 "num_blocks": 3907029168, 00:28:15.890 "uuid": "be52d514-6733-4a2c-b4d3-efcbc0e69ac4", 00:28:15.890 "assigned_rate_limits": { 00:28:15.890 "rw_ios_per_sec": 0, 00:28:15.890 "rw_mbytes_per_sec": 0, 00:28:15.890 "r_mbytes_per_sec": 0, 00:28:15.890 "w_mbytes_per_sec": 0 00:28:15.890 }, 00:28:15.890 "claimed": false, 00:28:15.890 "zoned": false, 00:28:15.890 "supported_io_types": { 00:28:15.890 "read": true, 00:28:15.890 "write": true, 00:28:15.890 "unmap": true, 00:28:15.890 "flush": true, 00:28:15.890 "reset": true, 00:28:15.890 "nvme_admin": true, 00:28:15.890 "nvme_io": true, 00:28:15.890 "nvme_io_md": false, 00:28:15.890 "write_zeroes": true, 00:28:15.890 "zcopy": false, 00:28:15.890 "get_zone_info": false, 00:28:15.890 "zone_management": false, 00:28:15.890 "zone_append": false, 00:28:15.890 "compare": false, 00:28:15.890 "compare_and_write": false, 00:28:15.890 "abort": true, 00:28:15.890 "seek_hole": false, 00:28:15.890 "seek_data": false, 00:28:15.890 "copy": false, 00:28:15.890 "nvme_iov_md": false 00:28:15.890 }, 00:28:15.890 "driver_specific": { 00:28:15.890 "nvme": [ 00:28:15.890 { 00:28:15.890 "pci_address": "0000:d8:00.0", 00:28:15.890 "trid": { 00:28:15.890 "trtype": "PCIe", 00:28:15.890 "traddr": "0000:d8:00.0" 00:28:15.890 }, 00:28:15.890 "ctrlr_data": { 00:28:15.890 "cntlid": 0, 00:28:15.890 "vendor_id": "0x8086", 00:28:15.890 "model_number": "INTEL SSDPE2KX020T8", 00:28:15.890 "serial_number": "BTLJ125505KA2P0BGN", 00:28:15.890 "firmware_revision": "VDV10170", 00:28:15.890 "oacs": { 00:28:15.890 "security": 0, 00:28:15.890 "format": 1, 00:28:15.890 "firmware": 1, 00:28:15.890 "ns_manage": 1 00:28:15.890 }, 00:28:15.890 "multi_ctrlr": false, 00:28:15.890 "ana_reporting": false 00:28:15.890 }, 00:28:15.890 "vs": { 00:28:15.890 "nvme_version": "1.2" 00:28:15.890 }, 00:28:15.890 "ns_data": { 00:28:15.890 "id": 1, 00:28:15.890 "can_share": false 00:28:15.890 } 00:28:15.890 } 00:28:15.890 ], 00:28:15.890 "mp_policy": "active_passive" 00:28:15.890 } 00:28:15.890 } 00:28:15.890 ] 00:28:15.890 12:10:01 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:15.890 12:10:01 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:16.148 [2024-07-25 12:10:02.016608] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10c9140 PMD being used: compress_qat 00:28:17.084 812d16f3-4efa-4b1c-967f-6469e118e047 00:28:17.084 12:10:03 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:17.351 0d15c9dd-1356-4c11-bad7-61ae3c8bacfd 00:28:17.351 12:10:03 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:17.352 12:10:03 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:17.614 12:10:03 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:17.614 [ 00:28:17.614 { 00:28:17.614 "name": "0d15c9dd-1356-4c11-bad7-61ae3c8bacfd", 00:28:17.614 "aliases": [ 00:28:17.614 "lvs0/lv0" 00:28:17.614 ], 00:28:17.614 "product_name": "Logical Volume", 00:28:17.614 "block_size": 512, 00:28:17.614 "num_blocks": 204800, 00:28:17.614 "uuid": "0d15c9dd-1356-4c11-bad7-61ae3c8bacfd", 00:28:17.614 "assigned_rate_limits": { 00:28:17.614 "rw_ios_per_sec": 0, 00:28:17.614 "rw_mbytes_per_sec": 0, 00:28:17.614 "r_mbytes_per_sec": 0, 00:28:17.614 "w_mbytes_per_sec": 0 00:28:17.614 }, 00:28:17.614 "claimed": false, 00:28:17.614 "zoned": false, 00:28:17.614 "supported_io_types": { 00:28:17.614 "read": true, 00:28:17.614 "write": true, 00:28:17.614 "unmap": true, 00:28:17.614 "flush": false, 00:28:17.614 "reset": true, 00:28:17.614 "nvme_admin": false, 00:28:17.614 "nvme_io": false, 00:28:17.614 "nvme_io_md": false, 00:28:17.614 "write_zeroes": true, 00:28:17.614 "zcopy": false, 00:28:17.614 "get_zone_info": false, 00:28:17.614 "zone_management": false, 00:28:17.614 "zone_append": false, 00:28:17.614 "compare": false, 00:28:17.614 "compare_and_write": false, 00:28:17.614 "abort": false, 00:28:17.614 "seek_hole": true, 00:28:17.614 "seek_data": true, 00:28:17.614 "copy": false, 00:28:17.614 "nvme_iov_md": false 00:28:17.614 }, 00:28:17.614 "driver_specific": { 00:28:17.614 "lvol": { 00:28:17.614 "lvol_store_uuid": "812d16f3-4efa-4b1c-967f-6469e118e047", 00:28:17.614 "base_bdev": "Nvme0n1", 00:28:17.614 "thin_provision": true, 00:28:17.614 "num_allocated_clusters": 0, 00:28:17.614 "snapshot": false, 00:28:17.614 "clone": false, 00:28:17.614 "esnap_clone": false 00:28:17.614 } 00:28:17.614 } 00:28:17.614 } 00:28:17.614 ] 00:28:17.614 12:10:03 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:17.614 12:10:03 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:17.614 12:10:03 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:17.872 [2024-07-25 12:10:03.891828] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:17.872 COMP_lvs0/lv0 00:28:17.872 12:10:03 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:17.872 12:10:03 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:28:18.131 12:10:04 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:18.390 [ 00:28:18.390 { 00:28:18.390 "name": "COMP_lvs0/lv0", 00:28:18.390 "aliases": [ 00:28:18.390 "e8b1a170-ec09-58bb-8c7d-6031c7f3633d" 00:28:18.390 ], 00:28:18.390 "product_name": "compress", 00:28:18.390 "block_size": 512, 00:28:18.390 "num_blocks": 200704, 00:28:18.390 "uuid": "e8b1a170-ec09-58bb-8c7d-6031c7f3633d", 00:28:18.390 "assigned_rate_limits": { 00:28:18.390 "rw_ios_per_sec": 0, 00:28:18.390 "rw_mbytes_per_sec": 0, 00:28:18.390 "r_mbytes_per_sec": 0, 00:28:18.390 "w_mbytes_per_sec": 0 00:28:18.390 }, 00:28:18.390 "claimed": false, 00:28:18.390 "zoned": false, 00:28:18.390 "supported_io_types": { 00:28:18.390 "read": true, 00:28:18.390 "write": true, 00:28:18.390 "unmap": false, 00:28:18.390 "flush": false, 00:28:18.390 "reset": false, 00:28:18.390 "nvme_admin": false, 00:28:18.390 "nvme_io": false, 00:28:18.390 "nvme_io_md": false, 00:28:18.390 "write_zeroes": true, 00:28:18.390 "zcopy": false, 00:28:18.390 "get_zone_info": false, 00:28:18.390 "zone_management": false, 00:28:18.390 "zone_append": false, 00:28:18.390 "compare": false, 00:28:18.390 "compare_and_write": false, 00:28:18.390 "abort": false, 00:28:18.390 "seek_hole": false, 00:28:18.390 "seek_data": false, 00:28:18.390 "copy": false, 00:28:18.390 "nvme_iov_md": false 00:28:18.390 }, 00:28:18.390 "driver_specific": { 00:28:18.390 "compress": { 00:28:18.390 "name": "COMP_lvs0/lv0", 00:28:18.390 "base_bdev_name": "0d15c9dd-1356-4c11-bad7-61ae3c8bacfd", 00:28:18.390 "pm_path": "/tmp/pmem/5e8d9bc6-1980-489d-9c7a-fe59595a8385" 00:28:18.390 } 00:28:18.390 } 00:28:18.390 } 00:28:18.390 ] 00:28:18.390 12:10:04 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:18.390 12:10:04 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:18.390 [2024-07-25 12:10:04.482163] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f69481b15c0 PMD being used: compress_qat 00:28:18.390 [2024-07-25 12:10:04.484222] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x128e7e0 PMD being used: compress_qat 00:28:18.390 Running I/O for 3 seconds... 
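Condensed from the RPC calls traced above, the volume stack driven by this run can be rebuilt roughly as follows; this is a sketch rather than the exact compress.sh code path, and the pipe from gen_nvme.sh into load_subsystem_config is inferred from the paired xtrace lines:

    # Sketch of create_vols as traced above (default compress chunk size),
    # assuming bdevperf is already running in -z mode on /var/tmp/spdk.sock.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"
    "$SPDK_DIR/scripts/gen_nvme.sh" | "$rpc" load_subsystem_config   # attaches the local NVMe disk as Nvme0n1
    "$rpc" bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
    "$rpc" bdev_lvol_create -t -l lvs0 lv0 100      # thin-provisioned, 100 MiB (204800 x 512 B blocks)
    mkdir -p /tmp/pmem
    "$rpc" bdev_compress_create -b lvs0/lv0 -p /tmp/pmem   # exposes COMP_lvs0/lv0
    # Kick off the verify workload in the waiting bdevperf instance.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" perform_tests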
00:28:21.674 00:28:21.674 Latency(us) 00:28:21.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.674 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:21.674 Verification LBA range: start 0x0 length 0x3100 00:28:21.674 COMP_lvs0/lv0 : 3.01 4085.31 15.96 0.00 0.00 7774.30 129.43 13526.63 00:28:21.674 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:21.674 Verification LBA range: start 0x3100 length 0x3100 00:28:21.674 COMP_lvs0/lv0 : 3.01 4195.47 16.39 0.00 0.00 7590.99 121.24 12897.48 00:28:21.674 =================================================================================================================== 00:28:21.674 Total : 8280.78 32.35 0.00 0.00 7681.49 121.24 13526.63 00:28:21.674 0 00:28:21.674 12:10:07 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:21.674 12:10:07 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:21.674 12:10:07 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:21.933 12:10:07 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:21.933 12:10:07 compress_compdev -- compress/compress.sh@78 -- # killprocess 98489 00:28:21.933 12:10:07 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 98489 ']' 00:28:21.933 12:10:07 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 98489 00:28:21.933 12:10:07 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:21.933 12:10:07 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98489 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98489' 00:28:21.933 killing process with pid 98489 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@969 -- # kill 98489 00:28:21.933 Received shutdown signal, test time was about 3.000000 seconds 00:28:21.933 00:28:21.933 Latency(us) 00:28:21.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.933 =================================================================================================================== 00:28:21.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.933 12:10:08 compress_compdev -- common/autotest_common.sh@974 -- # wait 98489 00:28:24.462 12:10:10 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:28:24.462 12:10:10 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:24.462 12:10:10 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=100703 00:28:24.462 12:10:10 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:24.462 12:10:10 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:24.462 12:10:10 compress_compdev -- 
compress/compress.sh@73 -- # waitforlisten 100703 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 100703 ']' 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.462 12:10:10 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:24.721 [2024-07-25 12:10:10.585596] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:24.721 [2024-07-25 12:10:10.585658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100703 ] 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:28:24.721 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:24.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.721 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:24.721 [2024-07-25 12:10:10.706948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.721 [2024-07-25 12:10:10.795009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.721 [2024-07-25 12:10:10.795014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.658 [2024-07-25 12:10:11.473144] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:25.658 12:10:11 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.658 12:10:11 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:25.658 12:10:11 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:28:25.658 12:10:11 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:25.658 12:10:11 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:28.944 [2024-07-25 12:10:14.617854] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1332f00 PMD being used: compress_qat 00:28:28.944 12:10:14 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:28.944 12:10:14 compress_compdev 
-- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:28.944 12:10:14 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:29.203 [ 00:28:29.203 { 00:28:29.203 "name": "Nvme0n1", 00:28:29.203 "aliases": [ 00:28:29.203 "b21c8783-f9e4-434b-9ea3-22fecf6ff343" 00:28:29.203 ], 00:28:29.203 "product_name": "NVMe disk", 00:28:29.203 "block_size": 512, 00:28:29.203 "num_blocks": 3907029168, 00:28:29.203 "uuid": "b21c8783-f9e4-434b-9ea3-22fecf6ff343", 00:28:29.203 "assigned_rate_limits": { 00:28:29.203 "rw_ios_per_sec": 0, 00:28:29.203 "rw_mbytes_per_sec": 0, 00:28:29.203 "r_mbytes_per_sec": 0, 00:28:29.203 "w_mbytes_per_sec": 0 00:28:29.203 }, 00:28:29.203 "claimed": false, 00:28:29.203 "zoned": false, 00:28:29.203 "supported_io_types": { 00:28:29.203 "read": true, 00:28:29.203 "write": true, 00:28:29.203 "unmap": true, 00:28:29.203 "flush": true, 00:28:29.203 "reset": true, 00:28:29.203 "nvme_admin": true, 00:28:29.203 "nvme_io": true, 00:28:29.203 "nvme_io_md": false, 00:28:29.203 "write_zeroes": true, 00:28:29.203 "zcopy": false, 00:28:29.203 "get_zone_info": false, 00:28:29.203 "zone_management": false, 00:28:29.203 "zone_append": false, 00:28:29.203 "compare": false, 00:28:29.203 "compare_and_write": false, 00:28:29.203 "abort": true, 00:28:29.203 "seek_hole": false, 00:28:29.203 "seek_data": false, 00:28:29.203 "copy": false, 00:28:29.203 "nvme_iov_md": false 00:28:29.203 }, 00:28:29.203 "driver_specific": { 00:28:29.203 "nvme": [ 00:28:29.203 { 00:28:29.203 "pci_address": "0000:d8:00.0", 00:28:29.203 "trid": { 00:28:29.203 "trtype": "PCIe", 00:28:29.203 "traddr": "0000:d8:00.0" 00:28:29.203 }, 00:28:29.203 "ctrlr_data": { 00:28:29.203 "cntlid": 0, 00:28:29.203 "vendor_id": "0x8086", 00:28:29.203 "model_number": "INTEL SSDPE2KX020T8", 00:28:29.203 "serial_number": "BTLJ125505KA2P0BGN", 00:28:29.203 "firmware_revision": "VDV10170", 00:28:29.203 "oacs": { 00:28:29.203 "security": 0, 00:28:29.203 "format": 1, 00:28:29.203 "firmware": 1, 00:28:29.203 "ns_manage": 1 00:28:29.203 }, 00:28:29.203 "multi_ctrlr": false, 00:28:29.203 "ana_reporting": false 00:28:29.203 }, 00:28:29.203 "vs": { 00:28:29.203 "nvme_version": "1.2" 00:28:29.203 }, 00:28:29.203 "ns_data": { 00:28:29.203 "id": 1, 00:28:29.203 "can_share": false 00:28:29.203 } 00:28:29.203 } 00:28:29.203 ], 00:28:29.203 "mp_policy": "active_passive" 00:28:29.203 } 00:28:29.203 } 00:28:29.203 ] 00:28:29.203 12:10:15 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:29.203 12:10:15 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:29.203 [2024-07-25 12:10:15.306743] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x116a140 PMD being used: compress_qat 00:28:30.579 b99207a8-082c-42cd-bd25-be9a887e9b89 00:28:30.579 12:10:16 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l 
lvs0 lv0 100 00:28:30.579 e3a2d1c6-3f7e-4888-9609-7b1dd89326a0 00:28:30.579 12:10:16 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:30.579 12:10:16 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:30.837 12:10:16 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:31.096 [ 00:28:31.096 { 00:28:31.096 "name": "e3a2d1c6-3f7e-4888-9609-7b1dd89326a0", 00:28:31.096 "aliases": [ 00:28:31.096 "lvs0/lv0" 00:28:31.096 ], 00:28:31.096 "product_name": "Logical Volume", 00:28:31.096 "block_size": 512, 00:28:31.096 "num_blocks": 204800, 00:28:31.096 "uuid": "e3a2d1c6-3f7e-4888-9609-7b1dd89326a0", 00:28:31.096 "assigned_rate_limits": { 00:28:31.096 "rw_ios_per_sec": 0, 00:28:31.096 "rw_mbytes_per_sec": 0, 00:28:31.096 "r_mbytes_per_sec": 0, 00:28:31.096 "w_mbytes_per_sec": 0 00:28:31.096 }, 00:28:31.096 "claimed": false, 00:28:31.096 "zoned": false, 00:28:31.096 "supported_io_types": { 00:28:31.096 "read": true, 00:28:31.096 "write": true, 00:28:31.096 "unmap": true, 00:28:31.096 "flush": false, 00:28:31.096 "reset": true, 00:28:31.096 "nvme_admin": false, 00:28:31.096 "nvme_io": false, 00:28:31.096 "nvme_io_md": false, 00:28:31.096 "write_zeroes": true, 00:28:31.096 "zcopy": false, 00:28:31.096 "get_zone_info": false, 00:28:31.096 "zone_management": false, 00:28:31.096 "zone_append": false, 00:28:31.096 "compare": false, 00:28:31.096 "compare_and_write": false, 00:28:31.096 "abort": false, 00:28:31.096 "seek_hole": true, 00:28:31.096 "seek_data": true, 00:28:31.096 "copy": false, 00:28:31.096 "nvme_iov_md": false 00:28:31.096 }, 00:28:31.096 "driver_specific": { 00:28:31.096 "lvol": { 00:28:31.096 "lvol_store_uuid": "b99207a8-082c-42cd-bd25-be9a887e9b89", 00:28:31.096 "base_bdev": "Nvme0n1", 00:28:31.096 "thin_provision": true, 00:28:31.096 "num_allocated_clusters": 0, 00:28:31.096 "snapshot": false, 00:28:31.096 "clone": false, 00:28:31.096 "esnap_clone": false 00:28:31.096 } 00:28:31.096 } 00:28:31.096 } 00:28:31.096 ] 00:28:31.096 12:10:16 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:31.096 12:10:16 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:28:31.096 12:10:16 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:28:31.096 [2024-07-25 12:10:17.188743] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:31.096 COMP_lvs0/lv0 00:28:31.096 12:10:17 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@901 
-- # local i 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:31.096 12:10:17 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:31.354 12:10:17 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:31.612 [ 00:28:31.612 { 00:28:31.612 "name": "COMP_lvs0/lv0", 00:28:31.612 "aliases": [ 00:28:31.612 "ad44e693-a584-5560-8cd3-4c4e13d6c28d" 00:28:31.612 ], 00:28:31.612 "product_name": "compress", 00:28:31.612 "block_size": 512, 00:28:31.612 "num_blocks": 200704, 00:28:31.612 "uuid": "ad44e693-a584-5560-8cd3-4c4e13d6c28d", 00:28:31.612 "assigned_rate_limits": { 00:28:31.612 "rw_ios_per_sec": 0, 00:28:31.612 "rw_mbytes_per_sec": 0, 00:28:31.612 "r_mbytes_per_sec": 0, 00:28:31.612 "w_mbytes_per_sec": 0 00:28:31.612 }, 00:28:31.612 "claimed": false, 00:28:31.612 "zoned": false, 00:28:31.612 "supported_io_types": { 00:28:31.612 "read": true, 00:28:31.612 "write": true, 00:28:31.612 "unmap": false, 00:28:31.612 "flush": false, 00:28:31.612 "reset": false, 00:28:31.612 "nvme_admin": false, 00:28:31.612 "nvme_io": false, 00:28:31.612 "nvme_io_md": false, 00:28:31.612 "write_zeroes": true, 00:28:31.612 "zcopy": false, 00:28:31.612 "get_zone_info": false, 00:28:31.612 "zone_management": false, 00:28:31.612 "zone_append": false, 00:28:31.612 "compare": false, 00:28:31.612 "compare_and_write": false, 00:28:31.612 "abort": false, 00:28:31.612 "seek_hole": false, 00:28:31.612 "seek_data": false, 00:28:31.612 "copy": false, 00:28:31.613 "nvme_iov_md": false 00:28:31.613 }, 00:28:31.613 "driver_specific": { 00:28:31.613 "compress": { 00:28:31.613 "name": "COMP_lvs0/lv0", 00:28:31.613 "base_bdev_name": "e3a2d1c6-3f7e-4888-9609-7b1dd89326a0", 00:28:31.613 "pm_path": "/tmp/pmem/df3b5989-e5c1-47c6-8e02-72f06aac1b32" 00:28:31.613 } 00:28:31.613 } 00:28:31.613 } 00:28:31.613 ] 00:28:31.613 12:10:17 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:31.613 12:10:17 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:31.871 [2024-07-25 12:10:17.767018] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fd6d41b15c0 PMD being used: compress_qat 00:28:31.871 [2024-07-25 12:10:17.769077] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x132faf0 PMD being used: compress_qat 00:28:31.871 Running I/O for 3 seconds... 
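This second pass differs from the first only in the logical-block argument passed when the compress bdev is created (512 here; presumably -l 4096 for the run that follows, matching its run_bdevperf 32 4096 3 4096 arguments), and each pass tears the stack down the same way; roughly:

    # Sketch of the per-run variation; $rpc as in the earlier sketch.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512
    # Tear-down between runs, as destroy_vols does in the trace:
    "$rpc" bdev_compress_delete COMP_lvs0/lv0
    "$rpc" bdev_lvol_delete_lvstore -l lvs0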
00:28:35.186 00:28:35.186 Latency(us) 00:28:35.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.186 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:35.186 Verification LBA range: start 0x0 length 0x3100 00:28:35.186 COMP_lvs0/lv0 : 3.01 3936.73 15.38 0.00 0.00 8074.65 128.61 15414.07 00:28:35.186 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:35.186 Verification LBA range: start 0x3100 length 0x3100 00:28:35.186 COMP_lvs0/lv0 : 3.01 4079.04 15.93 0.00 0.00 7806.17 119.60 15414.07 00:28:35.186 =================================================================================================================== 00:28:35.186 Total : 8015.78 31.31 0.00 0.00 7938.04 119.60 15414.07 00:28:35.186 0 00:28:35.186 12:10:20 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:35.186 12:10:20 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:35.186 12:10:21 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:35.186 12:10:21 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:35.186 12:10:21 compress_compdev -- compress/compress.sh@78 -- # killprocess 100703 00:28:35.186 12:10:21 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 100703 ']' 00:28:35.186 12:10:21 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 100703 00:28:35.186 12:10:21 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:35.186 12:10:21 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.186 12:10:21 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100703 00:28:35.444 12:10:21 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:35.444 12:10:21 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:35.444 12:10:21 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100703' 00:28:35.444 killing process with pid 100703 00:28:35.444 12:10:21 compress_compdev -- common/autotest_common.sh@969 -- # kill 100703 00:28:35.445 Received shutdown signal, test time was about 3.000000 seconds 00:28:35.445 00:28:35.445 Latency(us) 00:28:35.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.445 =================================================================================================================== 00:28:35.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.445 12:10:21 compress_compdev -- common/autotest_common.sh@974 -- # wait 100703 00:28:37.978 12:10:23 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:28:37.978 12:10:23 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:37.978 12:10:23 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=102983 00:28:37.978 12:10:23 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:37.978 12:10:23 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:37.978 12:10:23 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 102983 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 102983 ']' 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.978 12:10:23 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:37.978 [2024-07-25 12:10:23.816482] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:37.978 [2024-07-25 12:10:23.816543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102983 ] 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.978 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:37.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:37.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:37.979 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:37.979 [2024-07-25 12:10:23.937628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:37.979 [2024-07-25 12:10:24.020911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.979 [2024-07-25 12:10:24.020916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.914 [2024-07-25 12:10:24.699417] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:38.914 12:10:24 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.914 12:10:24 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:38.914 12:10:24 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:28:38.914 12:10:24 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:38.914 12:10:24 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:42.200 [2024-07-25 12:10:27.847882] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20bbf00 PMD being used: compress_qat 00:28:42.200 12:10:27 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:42.200 12:10:27 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:42.200 
12:10:27 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:42.200 12:10:27 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:42.200 12:10:27 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:42.200 12:10:27 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:42.200 12:10:27 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:42.200 12:10:28 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:42.200 [ 00:28:42.200 { 00:28:42.200 "name": "Nvme0n1", 00:28:42.200 "aliases": [ 00:28:42.200 "1dd55a1e-4953-42cc-a8cf-4a9c06409221" 00:28:42.200 ], 00:28:42.200 "product_name": "NVMe disk", 00:28:42.200 "block_size": 512, 00:28:42.200 "num_blocks": 3907029168, 00:28:42.200 "uuid": "1dd55a1e-4953-42cc-a8cf-4a9c06409221", 00:28:42.200 "assigned_rate_limits": { 00:28:42.200 "rw_ios_per_sec": 0, 00:28:42.200 "rw_mbytes_per_sec": 0, 00:28:42.200 "r_mbytes_per_sec": 0, 00:28:42.200 "w_mbytes_per_sec": 0 00:28:42.200 }, 00:28:42.200 "claimed": false, 00:28:42.200 "zoned": false, 00:28:42.200 "supported_io_types": { 00:28:42.200 "read": true, 00:28:42.200 "write": true, 00:28:42.200 "unmap": true, 00:28:42.200 "flush": true, 00:28:42.200 "reset": true, 00:28:42.200 "nvme_admin": true, 00:28:42.200 "nvme_io": true, 00:28:42.200 "nvme_io_md": false, 00:28:42.200 "write_zeroes": true, 00:28:42.200 "zcopy": false, 00:28:42.200 "get_zone_info": false, 00:28:42.200 "zone_management": false, 00:28:42.200 "zone_append": false, 00:28:42.200 "compare": false, 00:28:42.200 "compare_and_write": false, 00:28:42.200 "abort": true, 00:28:42.200 "seek_hole": false, 00:28:42.200 "seek_data": false, 00:28:42.200 "copy": false, 00:28:42.200 "nvme_iov_md": false 00:28:42.200 }, 00:28:42.200 "driver_specific": { 00:28:42.200 "nvme": [ 00:28:42.200 { 00:28:42.200 "pci_address": "0000:d8:00.0", 00:28:42.200 "trid": { 00:28:42.200 "trtype": "PCIe", 00:28:42.200 "traddr": "0000:d8:00.0" 00:28:42.200 }, 00:28:42.200 "ctrlr_data": { 00:28:42.200 "cntlid": 0, 00:28:42.200 "vendor_id": "0x8086", 00:28:42.200 "model_number": "INTEL SSDPE2KX020T8", 00:28:42.200 "serial_number": "BTLJ125505KA2P0BGN", 00:28:42.200 "firmware_revision": "VDV10170", 00:28:42.200 "oacs": { 00:28:42.200 "security": 0, 00:28:42.200 "format": 1, 00:28:42.200 "firmware": 1, 00:28:42.200 "ns_manage": 1 00:28:42.200 }, 00:28:42.200 "multi_ctrlr": false, 00:28:42.200 "ana_reporting": false 00:28:42.200 }, 00:28:42.200 "vs": { 00:28:42.200 "nvme_version": "1.2" 00:28:42.200 }, 00:28:42.200 "ns_data": { 00:28:42.200 "id": 1, 00:28:42.200 "can_share": false 00:28:42.200 } 00:28:42.200 } 00:28:42.200 ], 00:28:42.200 "mp_policy": "active_passive" 00:28:42.200 } 00:28:42.200 } 00:28:42.200 ] 00:28:42.459 12:10:28 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:42.459 12:10:28 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:42.459 [2024-07-25 12:10:28.544855] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ef3140 PMD being used: compress_qat 00:28:43.834 aef3067d-ac63-44ef-abe8-3960c919d507 00:28:43.834 12:10:29 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:43.834 5ea85d08-b073-42e5-b386-38bc066aa068 00:28:43.834 12:10:29 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:43.834 12:10:29 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:43.834 12:10:29 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:43.834 12:10:29 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:43.835 12:10:29 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:43.835 12:10:29 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:43.835 12:10:29 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:44.093 12:10:29 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:44.093 [ 00:28:44.093 { 00:28:44.093 "name": "5ea85d08-b073-42e5-b386-38bc066aa068", 00:28:44.093 "aliases": [ 00:28:44.094 "lvs0/lv0" 00:28:44.094 ], 00:28:44.094 "product_name": "Logical Volume", 00:28:44.094 "block_size": 512, 00:28:44.094 "num_blocks": 204800, 00:28:44.094 "uuid": "5ea85d08-b073-42e5-b386-38bc066aa068", 00:28:44.094 "assigned_rate_limits": { 00:28:44.094 "rw_ios_per_sec": 0, 00:28:44.094 "rw_mbytes_per_sec": 0, 00:28:44.094 "r_mbytes_per_sec": 0, 00:28:44.094 "w_mbytes_per_sec": 0 00:28:44.094 }, 00:28:44.094 "claimed": false, 00:28:44.094 "zoned": false, 00:28:44.094 "supported_io_types": { 00:28:44.094 "read": true, 00:28:44.094 "write": true, 00:28:44.094 "unmap": true, 00:28:44.094 "flush": false, 00:28:44.094 "reset": true, 00:28:44.094 "nvme_admin": false, 00:28:44.094 "nvme_io": false, 00:28:44.094 "nvme_io_md": false, 00:28:44.094 "write_zeroes": true, 00:28:44.094 "zcopy": false, 00:28:44.094 "get_zone_info": false, 00:28:44.094 "zone_management": false, 00:28:44.094 "zone_append": false, 00:28:44.094 "compare": false, 00:28:44.094 "compare_and_write": false, 00:28:44.094 "abort": false, 00:28:44.094 "seek_hole": true, 00:28:44.094 "seek_data": true, 00:28:44.094 "copy": false, 00:28:44.094 "nvme_iov_md": false 00:28:44.094 }, 00:28:44.094 "driver_specific": { 00:28:44.094 "lvol": { 00:28:44.094 "lvol_store_uuid": "aef3067d-ac63-44ef-abe8-3960c919d507", 00:28:44.094 "base_bdev": "Nvme0n1", 00:28:44.094 "thin_provision": true, 00:28:44.094 "num_allocated_clusters": 0, 00:28:44.094 "snapshot": false, 00:28:44.094 "clone": false, 00:28:44.094 "esnap_clone": false 00:28:44.094 } 00:28:44.094 } 00:28:44.094 } 00:28:44.094 ] 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:44.352 12:10:30 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:28:44.352 12:10:30 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:28:44.352 [2024-07-25 12:10:30.435921] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:44.352 COMP_lvs0/lv0 00:28:44.352 12:10:30 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:44.352 12:10:30 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:44.610 12:10:30 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:44.869 [ 00:28:44.869 { 00:28:44.869 "name": "COMP_lvs0/lv0", 00:28:44.869 "aliases": [ 00:28:44.869 "e496bfa0-841a-554a-837f-07c71d3db483" 00:28:44.869 ], 00:28:44.869 "product_name": "compress", 00:28:44.869 "block_size": 4096, 00:28:44.869 "num_blocks": 25088, 00:28:44.869 "uuid": "e496bfa0-841a-554a-837f-07c71d3db483", 00:28:44.869 "assigned_rate_limits": { 00:28:44.869 "rw_ios_per_sec": 0, 00:28:44.869 "rw_mbytes_per_sec": 0, 00:28:44.869 "r_mbytes_per_sec": 0, 00:28:44.869 "w_mbytes_per_sec": 0 00:28:44.869 }, 00:28:44.869 "claimed": false, 00:28:44.869 "zoned": false, 00:28:44.869 "supported_io_types": { 00:28:44.869 "read": true, 00:28:44.869 "write": true, 00:28:44.869 "unmap": false, 00:28:44.869 "flush": false, 00:28:44.869 "reset": false, 00:28:44.869 "nvme_admin": false, 00:28:44.869 "nvme_io": false, 00:28:44.869 "nvme_io_md": false, 00:28:44.869 "write_zeroes": true, 00:28:44.869 "zcopy": false, 00:28:44.869 "get_zone_info": false, 00:28:44.869 "zone_management": false, 00:28:44.869 "zone_append": false, 00:28:44.869 "compare": false, 00:28:44.869 "compare_and_write": false, 00:28:44.869 "abort": false, 00:28:44.869 "seek_hole": false, 00:28:44.869 "seek_data": false, 00:28:44.869 "copy": false, 00:28:44.869 "nvme_iov_md": false 00:28:44.869 }, 00:28:44.869 "driver_specific": { 00:28:44.869 "compress": { 00:28:44.869 "name": "COMP_lvs0/lv0", 00:28:44.869 "base_bdev_name": "5ea85d08-b073-42e5-b386-38bc066aa068", 00:28:44.869 "pm_path": "/tmp/pmem/44b383d4-bf1f-40b8-9c9f-59c8bf2b384c" 00:28:44.869 } 00:28:44.869 } 00:28:44.869 } 00:28:44.869 ] 00:28:44.869 12:10:30 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:44.869 12:10:30 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:45.127 [2024-07-25 12:10:31.002143] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f09e01b15c0 PMD being used: compress_qat 00:28:45.127 [2024-07-25 12:10:31.004203] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20b8af0 PMD being used: compress_qat 00:28:45.127 Running I/O for 3 seconds... 
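For anyone reproducing this step outside the harness: the create_vols/perform_tests sequence traced above boils down to a handful of RPC calls. The lines below are only a sketch — they assume the same workspace path, a bdevperf instance already listening on /var/tmp/spdk.sock, and SPDK= is shorthand introduced here (the waitforbdev checks between steps are omitted).

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
# Attach the NVMe controller and build the volume stack (compress.sh lines 34-46 in the trace;
# gen_nvme.sh and load_subsystem_config both appear at line 34, i.e. one pipeline)
$SPDK/scripts/gen_nvme.sh | $SPDK/scripts/rpc.py load_subsystem_config
$SPDK/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
$SPDK/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100                    # thin-provisioned lv0 (204800 x 512-byte blocks)
$SPDK/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096  # exposes COMP_lvs0/lv0 with a 4 KiB logical block size
# Start the 3-second verify workload whose results follow
$SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests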
00:28:48.410
00:28:48.410 Latency(us)
00:28:48.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.410 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:28:48.410 Verification LBA range: start 0x0 length 0x3100
00:28:48.410 COMP_lvs0/lv0 : 3.01 4051.74 15.83 0.00 0.00 7845.98 175.31 12792.63
00:28:48.410 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:28:48.410 Verification LBA range: start 0x3100 length 0x3100
00:28:48.410 COMP_lvs0/lv0 : 3.01 4119.51 16.09 0.00 0.00 7727.12 167.12 13369.34
00:28:48.410 ===================================================================================================================
00:28:48.410 Total : 8171.25 31.92 0.00 0.00 7786.08 167.12 13369.34
00:28:48.410 0
00:28:48.410 12:10:34 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:28:48.410 12:10:34 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:28:48.410 12:10:34 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:28:48.410 12:10:34 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:28:48.410 12:10:34 compress_compdev -- compress/compress.sh@78 -- # killprocess 102983
00:28:48.410 12:10:34 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 102983 ']'
00:28:48.410 12:10:34 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 102983
00:28:48.410 12:10:34 compress_compdev -- common/autotest_common.sh@955 -- # uname
00:28:48.410 12:10:34 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:48.410 12:10:34 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102983
00:28:48.668 12:10:34 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:48.668 12:10:34 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:48.668 12:10:34 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102983'
00:28:48.668 killing process with pid 102983
00:28:48.668 12:10:34 compress_compdev -- common/autotest_common.sh@969 -- # kill 102983
00:28:48.668 Received shutdown signal, test time was about 3.000000 seconds
00:28:48.668
00:28:48.668 Latency(us)
00:28:48.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.668 ===================================================================================================================
00:28:48.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:48.668 12:10:34 compress_compdev -- common/autotest_common.sh@974 -- # wait 102983
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@89 -- # run_bdevio
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=105157
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w
00:28:51.199 12:10:37 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 105157
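The run_bdevio stage that starts here uses the same launch-then-wait pattern as the bdevperf runs. Roughly, and only as a sketch — the backgrounding and pid capture are implied by compress.sh@55 setting bdevio_pid rather than shown verbatim in the trace:

# compress.sh lines 51-57: start bdevio in wait-for-RPC mode, remember its pid, wait for the socket
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w &
bdevio_pid=$!
waitforlisten $bdevio_pid    # autotest_common.sh helper; waits until the app answers on /var/tmp/spdk.sock
# once create_vols has built COMP_lvs0/lv0, the blockdev tests are driven over RPC:
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests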
00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 105157 ']' 00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:51.199 12:10:37 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:51.199 [2024-07-25 12:10:37.079519] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:51.199 [2024-07-25 12:10:37.079581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105157 ] 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.0 cannot 
be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:51.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:51.199 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:51.199 [2024-07-25 12:10:37.212963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.199 [2024-07-25 12:10:37.297309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.199 [2024-07-25 12:10:37.297403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.199 [2024-07-25 12:10:37.297408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.197 [2024-07-25 12:10:37.972911] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:52.197 12:10:38 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:52.197 12:10:38 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:52.197 12:10:38 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:28:52.197 12:10:38 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:52.197 12:10:38 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:55.482 [2024-07-25 12:10:41.392445] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ad9aa0 PMD being used: compress_qat 00:28:55.482 12:10:41 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@899 -- # local 
bdev_name=Nvme0n1 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:55.482 12:10:41 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:55.739 12:10:41 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:55.997 [ 00:28:55.997 { 00:28:55.997 "name": "Nvme0n1", 00:28:55.997 "aliases": [ 00:28:55.997 "f0956a03-42d4-4fa0-81e3-0c18b2382bc3" 00:28:55.997 ], 00:28:55.997 "product_name": "NVMe disk", 00:28:55.997 "block_size": 512, 00:28:55.997 "num_blocks": 3907029168, 00:28:55.997 "uuid": "f0956a03-42d4-4fa0-81e3-0c18b2382bc3", 00:28:55.997 "assigned_rate_limits": { 00:28:55.997 "rw_ios_per_sec": 0, 00:28:55.997 "rw_mbytes_per_sec": 0, 00:28:55.997 "r_mbytes_per_sec": 0, 00:28:55.997 "w_mbytes_per_sec": 0 00:28:55.997 }, 00:28:55.997 "claimed": false, 00:28:55.997 "zoned": false, 00:28:55.997 "supported_io_types": { 00:28:55.997 "read": true, 00:28:55.997 "write": true, 00:28:55.997 "unmap": true, 00:28:55.997 "flush": true, 00:28:55.997 "reset": true, 00:28:55.997 "nvme_admin": true, 00:28:55.997 "nvme_io": true, 00:28:55.997 "nvme_io_md": false, 00:28:55.997 "write_zeroes": true, 00:28:55.997 "zcopy": false, 00:28:55.997 "get_zone_info": false, 00:28:55.997 "zone_management": false, 00:28:55.997 "zone_append": false, 00:28:55.997 "compare": false, 00:28:55.997 "compare_and_write": false, 00:28:55.997 "abort": true, 00:28:55.997 "seek_hole": false, 00:28:55.997 "seek_data": false, 00:28:55.997 "copy": false, 00:28:55.997 "nvme_iov_md": false 00:28:55.997 }, 00:28:55.997 "driver_specific": { 00:28:55.997 "nvme": [ 00:28:55.997 { 00:28:55.997 "pci_address": "0000:d8:00.0", 00:28:55.997 "trid": { 00:28:55.997 "trtype": "PCIe", 00:28:55.997 "traddr": "0000:d8:00.0" 00:28:55.997 }, 00:28:55.997 "ctrlr_data": { 00:28:55.997 "cntlid": 0, 00:28:55.997 "vendor_id": "0x8086", 00:28:55.997 "model_number": "INTEL SSDPE2KX020T8", 00:28:55.997 "serial_number": "BTLJ125505KA2P0BGN", 00:28:55.997 "firmware_revision": "VDV10170", 00:28:55.997 "oacs": { 00:28:55.997 "security": 0, 00:28:55.997 "format": 1, 00:28:55.997 "firmware": 1, 00:28:55.997 "ns_manage": 1 00:28:55.997 }, 00:28:55.997 "multi_ctrlr": false, 00:28:55.997 "ana_reporting": false 00:28:55.997 }, 00:28:55.997 "vs": { 00:28:55.997 "nvme_version": "1.2" 00:28:55.997 }, 00:28:55.997 "ns_data": { 00:28:55.997 "id": 1, 00:28:55.997 "can_share": false 00:28:55.997 } 00:28:55.997 } 00:28:55.997 ], 00:28:55.997 "mp_policy": "active_passive" 00:28:55.997 } 00:28:55.997 } 00:28:55.997 ] 00:28:55.997 12:10:41 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:55.997 12:10:41 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:55.997 [2024-07-25 12:10:42.101276] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ad9a40 PMD being used: compress_qat 00:28:57.373 ff2c0106-709a-4948-9e68-c666f2a511d6 00:28:57.373 12:10:43 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:57.373 d0e16164-ffc6-426e-a683-ba27ce6c5fa2 00:28:57.373 12:10:43 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:57.373 12:10:43 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:57.631 12:10:43 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:57.889 [ 00:28:57.889 { 00:28:57.889 "name": "d0e16164-ffc6-426e-a683-ba27ce6c5fa2", 00:28:57.889 "aliases": [ 00:28:57.889 "lvs0/lv0" 00:28:57.889 ], 00:28:57.889 "product_name": "Logical Volume", 00:28:57.889 "block_size": 512, 00:28:57.889 "num_blocks": 204800, 00:28:57.889 "uuid": "d0e16164-ffc6-426e-a683-ba27ce6c5fa2", 00:28:57.889 "assigned_rate_limits": { 00:28:57.889 "rw_ios_per_sec": 0, 00:28:57.889 "rw_mbytes_per_sec": 0, 00:28:57.889 "r_mbytes_per_sec": 0, 00:28:57.889 "w_mbytes_per_sec": 0 00:28:57.890 }, 00:28:57.890 "claimed": false, 00:28:57.890 "zoned": false, 00:28:57.890 "supported_io_types": { 00:28:57.890 "read": true, 00:28:57.890 "write": true, 00:28:57.890 "unmap": true, 00:28:57.890 "flush": false, 00:28:57.890 "reset": true, 00:28:57.890 "nvme_admin": false, 00:28:57.890 "nvme_io": false, 00:28:57.890 "nvme_io_md": false, 00:28:57.890 "write_zeroes": true, 00:28:57.890 "zcopy": false, 00:28:57.890 "get_zone_info": false, 00:28:57.890 "zone_management": false, 00:28:57.890 "zone_append": false, 00:28:57.890 "compare": false, 00:28:57.890 "compare_and_write": false, 00:28:57.890 "abort": false, 00:28:57.890 "seek_hole": true, 00:28:57.890 "seek_data": true, 00:28:57.890 "copy": false, 00:28:57.890 "nvme_iov_md": false 00:28:57.890 }, 00:28:57.890 "driver_specific": { 00:28:57.890 "lvol": { 00:28:57.890 "lvol_store_uuid": "ff2c0106-709a-4948-9e68-c666f2a511d6", 00:28:57.890 "base_bdev": "Nvme0n1", 00:28:57.890 "thin_provision": true, 00:28:57.890 "num_allocated_clusters": 0, 00:28:57.890 "snapshot": false, 00:28:57.890 "clone": false, 00:28:57.890 "esnap_clone": false 00:28:57.890 } 00:28:57.890 } 00:28:57.890 } 00:28:57.890 ] 00:28:57.890 12:10:43 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:57.890 12:10:43 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:57.890 12:10:43 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:58.148 [2024-07-25 12:10:44.072247] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:58.148 COMP_lvs0/lv0 00:28:58.148 12:10:44 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:58.148 12:10:44 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:58.407 12:10:44 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:58.407 [ 00:28:58.407 { 00:28:58.407 "name": "COMP_lvs0/lv0", 00:28:58.407 "aliases": [ 00:28:58.407 "26ef0dc1-79b0-5288-9321-a2c59b1aaf4d" 00:28:58.407 ], 00:28:58.407 "product_name": "compress", 00:28:58.407 "block_size": 512, 00:28:58.407 "num_blocks": 200704, 00:28:58.407 "uuid": "26ef0dc1-79b0-5288-9321-a2c59b1aaf4d", 00:28:58.407 "assigned_rate_limits": { 00:28:58.407 "rw_ios_per_sec": 0, 00:28:58.407 "rw_mbytes_per_sec": 0, 00:28:58.407 "r_mbytes_per_sec": 0, 00:28:58.407 "w_mbytes_per_sec": 0 00:28:58.407 }, 00:28:58.407 "claimed": false, 00:28:58.407 "zoned": false, 00:28:58.407 "supported_io_types": { 00:28:58.407 "read": true, 00:28:58.407 "write": true, 00:28:58.407 "unmap": false, 00:28:58.407 "flush": false, 00:28:58.407 "reset": false, 00:28:58.407 "nvme_admin": false, 00:28:58.407 "nvme_io": false, 00:28:58.407 "nvme_io_md": false, 00:28:58.407 "write_zeroes": true, 00:28:58.407 "zcopy": false, 00:28:58.407 "get_zone_info": false, 00:28:58.407 "zone_management": false, 00:28:58.407 "zone_append": false, 00:28:58.407 "compare": false, 00:28:58.407 "compare_and_write": false, 00:28:58.407 "abort": false, 00:28:58.407 "seek_hole": false, 00:28:58.407 "seek_data": false, 00:28:58.407 "copy": false, 00:28:58.407 "nvme_iov_md": false 00:28:58.407 }, 00:28:58.407 "driver_specific": { 00:28:58.407 "compress": { 00:28:58.407 "name": "COMP_lvs0/lv0", 00:28:58.407 "base_bdev_name": "d0e16164-ffc6-426e-a683-ba27ce6c5fa2", 00:28:58.407 "pm_path": "/tmp/pmem/cf3ea6d7-a067-4e1a-b25c-2ef68e04289a" 00:28:58.407 } 00:28:58.407 } 00:28:58.407 } 00:28:58.407 ] 00:28:58.666 12:10:44 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:58.666 12:10:44 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:58.666 [2024-07-25 12:10:44.637312] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f48001b1350 PMD being used: compress_qat 00:28:58.666 I/O targets: 00:28:58.666 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:28:58.666 00:28:58.666 00:28:58.666 CUnit - A unit testing framework for C - Version 2.1-3 00:28:58.666 http://cunit.sourceforge.net/ 00:28:58.666 00:28:58.666 00:28:58.666 Suite: bdevio tests on: COMP_lvs0/lv0 00:28:58.666 Test: blockdev write read block ...passed 00:28:58.666 Test: blockdev write zeroes read block ...passed 00:28:58.666 Test: blockdev write zeroes read no split ...passed 00:28:58.666 Test: blockdev write zeroes read split ...passed 00:28:58.666 Test: blockdev write zeroes read split partial ...passed 00:28:58.666 Test: blockdev reset ...[2024-07-25 12:10:44.698738] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:28:58.666 passed 00:28:58.666 Test: blockdev write read 8 blocks ...passed 00:28:58.666 Test: blockdev write read size > 128k ...passed 00:28:58.666 Test: blockdev write read invalid 
size ...passed 00:28:58.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:58.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:58.666 Test: blockdev write read max offset ...passed 00:28:58.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:58.666 Test: blockdev writev readv 8 blocks ...passed 00:28:58.666 Test: blockdev writev readv 30 x 1block ...passed 00:28:58.666 Test: blockdev writev readv block ...passed 00:28:58.666 Test: blockdev writev readv size > 128k ...passed 00:28:58.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:58.666 Test: blockdev comparev and writev ...passed 00:28:58.666 Test: blockdev nvme passthru rw ...passed 00:28:58.666 Test: blockdev nvme passthru vendor specific ...passed 00:28:58.666 Test: blockdev nvme admin passthru ...passed 00:28:58.666 Test: blockdev copy ...passed 00:28:58.666 00:28:58.666 Run Summary: Type Total Ran Passed Failed Inactive 00:28:58.666 suites 1 1 n/a 0 0 00:28:58.666 tests 23 23 23 0 0 00:28:58.666 asserts 130 130 130 0 n/a 00:28:58.666 00:28:58.666 Elapsed time = 0.202 seconds 00:28:58.666 0 00:28:58.666 12:10:44 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:28:58.666 12:10:44 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:58.925 12:10:44 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:59.183 12:10:45 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:28:59.183 12:10:45 compress_compdev -- compress/compress.sh@62 -- # killprocess 105157 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 105157 ']' 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 105157 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105157 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105157' 00:28:59.183 killing process with pid 105157 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@969 -- # kill 105157 00:28:59.183 12:10:45 compress_compdev -- common/autotest_common.sh@974 -- # wait 105157 00:29:01.714 12:10:47 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:29:01.714 12:10:47 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:29:01.714 00:29:01.714 real 0m50.640s 00:29:01.714 user 1m54.931s 00:29:01.714 sys 0m5.402s 00:29:01.714 12:10:47 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:01.714 12:10:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:01.714 ************************************ 00:29:01.714 END TEST compress_compdev 00:29:01.714 ************************************ 00:29:01.714 12:10:47 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 
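Before the isal variant spins up, note that every pass above ends with the same teardown. Replayed by hand it would look roughly like this, with paths as in this workspace and $pid standing in for the bdevperf/bdevio pid captured at launch:

# destroy_vols (compress.sh lines 29-30), then stop the app under test
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
killprocess "$pid"    # autotest_common.sh helper: checks the pid, sends the kill and waits for it, as traced above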
00:29:01.714 12:10:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:01.714 12:10:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:01.714 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:29:01.714 ************************************ 00:29:01.714 START TEST compress_isal 00:29:01.714 ************************************ 00:29:01.714 12:10:47 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:29:01.974 * Looking for test storage... 00:29:01.974 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:29:01.974 12:10:47 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:01.974 12:10:47 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:01.975 12:10:47 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.975 12:10:47 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.975 12:10:47 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.975 12:10:47 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.975 12:10:47 compress_isal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.975 12:10:47 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.975 12:10:47 compress_isal -- paths/export.sh@5 -- # export PATH 00:29:01.975 12:10:47 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@47 -- # : 0 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.975 12:10:47 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=106961 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@73 -- # waitforlisten 106961 00:29:01.975 12:10:47 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 106961 ']' 00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
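The isal flavour differs from the compdev runs only in how bdevperf is launched: there is no -c dpdk.json, so no DPDK compressdev PMD is configured and the compress vbdev presumably falls back to the ISA-L software path (that is the point of this sub-test per its name; treat the explanation as an inference, not something stated in the log). A sketch of the launch:

# compress.sh lines 69-73 for the isal pass: plain bdevperf, no DPDK compressdev config
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
    -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 &
bdevperf_pid=$!
waitforlisten $bdevperf_pid    # same helper as before; the qat_pci_device_allocate/EAL messages that follow
                               # appear during startup and the run still completes, as the results further down show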
00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.975 12:10:47 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:01.975 [2024-07-25 12:10:47.993237] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:01.975 [2024-07-25 12:10:47.993301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106961 ] 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:01.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.975 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:01.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.976 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:02.236 [2024-07-25 12:10:48.114364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:02.236 [2024-07-25 12:10:48.202112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.236 [2024-07-25 12:10:48.202118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.803 12:10:48 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.803 12:10:48 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:02.803 12:10:48 compress_isal -- compress/compress.sh@74 -- # create_vols 00:29:02.803 12:10:48 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:02.803 12:10:48 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:06.086 12:10:52 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:06.086 12:10:52 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:06.344 12:10:52 compress_isal -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:06.603 [ 00:29:06.603 { 00:29:06.603 "name": "Nvme0n1", 00:29:06.603 "aliases": [ 00:29:06.603 "ddb32383-4753-4f0e-b6a8-b2e461a0bc61" 00:29:06.603 ], 00:29:06.603 "product_name": "NVMe disk", 00:29:06.603 "block_size": 512, 00:29:06.603 "num_blocks": 3907029168, 00:29:06.603 "uuid": "ddb32383-4753-4f0e-b6a8-b2e461a0bc61", 00:29:06.603 "assigned_rate_limits": { 00:29:06.603 "rw_ios_per_sec": 0, 00:29:06.603 "rw_mbytes_per_sec": 0, 00:29:06.603 "r_mbytes_per_sec": 0, 00:29:06.603 "w_mbytes_per_sec": 0 00:29:06.603 }, 00:29:06.603 "claimed": false, 00:29:06.603 "zoned": false, 00:29:06.603 "supported_io_types": { 00:29:06.603 "read": true, 00:29:06.603 "write": true, 00:29:06.603 "unmap": true, 00:29:06.603 "flush": true, 00:29:06.603 "reset": true, 00:29:06.603 "nvme_admin": true, 00:29:06.603 "nvme_io": true, 00:29:06.603 "nvme_io_md": false, 00:29:06.603 "write_zeroes": true, 00:29:06.603 "zcopy": false, 00:29:06.603 "get_zone_info": false, 00:29:06.603 "zone_management": false, 00:29:06.603 "zone_append": false, 00:29:06.603 "compare": false, 00:29:06.603 "compare_and_write": false, 00:29:06.603 "abort": true, 00:29:06.603 "seek_hole": false, 00:29:06.603 "seek_data": false, 00:29:06.603 "copy": false, 00:29:06.603 "nvme_iov_md": false 00:29:06.603 }, 00:29:06.603 "driver_specific": { 00:29:06.603 "nvme": [ 00:29:06.603 { 00:29:06.603 "pci_address": "0000:d8:00.0", 00:29:06.603 "trid": { 00:29:06.603 "trtype": "PCIe", 00:29:06.603 "traddr": "0000:d8:00.0" 00:29:06.603 }, 00:29:06.603 "ctrlr_data": { 00:29:06.603 "cntlid": 0, 00:29:06.603 "vendor_id": "0x8086", 00:29:06.603 "model_number": "INTEL SSDPE2KX020T8", 00:29:06.603 "serial_number": "BTLJ125505KA2P0BGN", 00:29:06.603 "firmware_revision": "VDV10170", 00:29:06.603 "oacs": { 00:29:06.603 "security": 0, 00:29:06.603 "format": 1, 00:29:06.603 "firmware": 1, 00:29:06.603 "ns_manage": 1 00:29:06.603 }, 00:29:06.603 "multi_ctrlr": false, 00:29:06.603 "ana_reporting": false 00:29:06.603 }, 00:29:06.603 "vs": { 00:29:06.603 "nvme_version": "1.2" 00:29:06.603 }, 00:29:06.603 "ns_data": { 00:29:06.603 "id": 1, 00:29:06.603 "can_share": false 00:29:06.603 } 00:29:06.603 } 00:29:06.603 ], 00:29:06.603 "mp_policy": "active_passive" 00:29:06.603 } 00:29:06.603 } 00:29:06.603 ] 00:29:06.603 12:10:52 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:06.603 12:10:52 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:07.977 137c3f41-ee61-4d88-8598-b5b6668dbdc7 00:29:07.977 12:10:53 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:07.977 7c6fd1a6-eb12-4595-aee5-24653048ffb2 00:29:07.977 12:10:53 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:07.977 12:10:53 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:08.235 12:10:54 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:08.235 [ 00:29:08.235 { 00:29:08.235 "name": "7c6fd1a6-eb12-4595-aee5-24653048ffb2", 00:29:08.235 "aliases": [ 00:29:08.235 "lvs0/lv0" 00:29:08.235 ], 00:29:08.235 "product_name": "Logical Volume", 00:29:08.235 "block_size": 512, 00:29:08.235 "num_blocks": 204800, 00:29:08.235 "uuid": "7c6fd1a6-eb12-4595-aee5-24653048ffb2", 00:29:08.235 "assigned_rate_limits": { 00:29:08.235 "rw_ios_per_sec": 0, 00:29:08.235 "rw_mbytes_per_sec": 0, 00:29:08.235 "r_mbytes_per_sec": 0, 00:29:08.235 "w_mbytes_per_sec": 0 00:29:08.235 }, 00:29:08.235 "claimed": false, 00:29:08.235 "zoned": false, 00:29:08.235 "supported_io_types": { 00:29:08.235 "read": true, 00:29:08.235 "write": true, 00:29:08.235 "unmap": true, 00:29:08.235 "flush": false, 00:29:08.235 "reset": true, 00:29:08.235 "nvme_admin": false, 00:29:08.235 "nvme_io": false, 00:29:08.235 "nvme_io_md": false, 00:29:08.235 "write_zeroes": true, 00:29:08.235 "zcopy": false, 00:29:08.235 "get_zone_info": false, 00:29:08.235 "zone_management": false, 00:29:08.235 "zone_append": false, 00:29:08.235 "compare": false, 00:29:08.235 "compare_and_write": false, 00:29:08.235 "abort": false, 00:29:08.235 "seek_hole": true, 00:29:08.235 "seek_data": true, 00:29:08.235 "copy": false, 00:29:08.235 "nvme_iov_md": false 00:29:08.235 }, 00:29:08.235 "driver_specific": { 00:29:08.235 "lvol": { 00:29:08.235 "lvol_store_uuid": "137c3f41-ee61-4d88-8598-b5b6668dbdc7", 00:29:08.235 "base_bdev": "Nvme0n1", 00:29:08.235 "thin_provision": true, 00:29:08.235 "num_allocated_clusters": 0, 00:29:08.235 "snapshot": false, 00:29:08.235 "clone": false, 00:29:08.235 "esnap_clone": false 00:29:08.235 } 00:29:08.235 } 00:29:08.235 } 00:29:08.236 ] 00:29:08.493 12:10:54 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:08.494 12:10:54 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:08.494 12:10:54 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:08.494 [2024-07-25 12:10:54.568668] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:08.494 COMP_lvs0/lv0 00:29:08.494 12:10:54 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:08.494 12:10:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:08.752 12:10:54 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:09.011 [ 00:29:09.011 { 00:29:09.011 "name": "COMP_lvs0/lv0", 00:29:09.011 "aliases": [ 00:29:09.011 "ca0140b7-a005-5371-98f8-a20fe4205126" 00:29:09.011 ], 00:29:09.011 "product_name": "compress", 
00:29:09.011 "block_size": 512, 00:29:09.011 "num_blocks": 200704, 00:29:09.011 "uuid": "ca0140b7-a005-5371-98f8-a20fe4205126", 00:29:09.011 "assigned_rate_limits": { 00:29:09.011 "rw_ios_per_sec": 0, 00:29:09.011 "rw_mbytes_per_sec": 0, 00:29:09.011 "r_mbytes_per_sec": 0, 00:29:09.011 "w_mbytes_per_sec": 0 00:29:09.011 }, 00:29:09.011 "claimed": false, 00:29:09.011 "zoned": false, 00:29:09.011 "supported_io_types": { 00:29:09.011 "read": true, 00:29:09.011 "write": true, 00:29:09.011 "unmap": false, 00:29:09.011 "flush": false, 00:29:09.011 "reset": false, 00:29:09.011 "nvme_admin": false, 00:29:09.011 "nvme_io": false, 00:29:09.011 "nvme_io_md": false, 00:29:09.011 "write_zeroes": true, 00:29:09.011 "zcopy": false, 00:29:09.011 "get_zone_info": false, 00:29:09.011 "zone_management": false, 00:29:09.011 "zone_append": false, 00:29:09.011 "compare": false, 00:29:09.011 "compare_and_write": false, 00:29:09.011 "abort": false, 00:29:09.011 "seek_hole": false, 00:29:09.011 "seek_data": false, 00:29:09.011 "copy": false, 00:29:09.011 "nvme_iov_md": false 00:29:09.011 }, 00:29:09.011 "driver_specific": { 00:29:09.011 "compress": { 00:29:09.011 "name": "COMP_lvs0/lv0", 00:29:09.011 "base_bdev_name": "7c6fd1a6-eb12-4595-aee5-24653048ffb2", 00:29:09.011 "pm_path": "/tmp/pmem/105207bc-4980-41e4-9867-667818b39739" 00:29:09.011 } 00:29:09.011 } 00:29:09.011 } 00:29:09.011 ] 00:29:09.011 12:10:55 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:09.011 12:10:55 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:09.269 Running I/O for 3 seconds... 00:29:12.592 00:29:12.592 Latency(us) 00:29:12.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.592 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:12.592 Verification LBA range: start 0x0 length 0x3100 00:29:12.592 COMP_lvs0/lv0 : 3.01 3511.42 13.72 0.00 0.00 9054.76 56.52 14889.78 00:29:12.592 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:12.592 Verification LBA range: start 0x3100 length 0x3100 00:29:12.592 COMP_lvs0/lv0 : 3.01 3519.74 13.75 0.00 0.00 9047.93 54.89 14155.78 00:29:12.592 =================================================================================================================== 00:29:12.593 Total : 7031.15 27.47 0.00 0.00 9051.34 54.89 14889.78 00:29:12.593 0 00:29:12.593 12:10:58 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:12.593 12:10:58 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:12.593 12:10:58 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:12.593 12:10:58 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:12.593 12:10:58 compress_isal -- compress/compress.sh@78 -- # killprocess 106961 00:29:12.593 12:10:58 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 106961 ']' 00:29:12.593 12:10:58 compress_isal -- common/autotest_common.sh@954 -- # kill -0 106961 00:29:12.593 12:10:58 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:12.593 12:10:58 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:12.593 12:10:58 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106961 00:29:12.850 
12:10:58 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:12.850 12:10:58 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:12.850 12:10:58 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106961' 00:29:12.850 killing process with pid 106961 00:29:12.850 12:10:58 compress_isal -- common/autotest_common.sh@969 -- # kill 106961 00:29:12.850 Received shutdown signal, test time was about 3.000000 seconds 00:29:12.850 00:29:12.850 Latency(us) 00:29:12.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.850 =================================================================================================================== 00:29:12.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.851 12:10:58 compress_isal -- common/autotest_common.sh@974 -- # wait 106961 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=109196 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:29:15.383 12:11:01 compress_isal -- compress/compress.sh@73 -- # waitforlisten 109196 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 109196 ']' 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:15.383 12:11:01 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:15.383 [2024-07-25 12:11:01.186109] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
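For context, each run_bdevperf pass in this suite follows the same driver pattern that the startup messages above and below show: bdevperf is launched idle with -z so it waits for RPC configuration, the compress volume stack is then created over rpc.py, and the verify workload is finally triggered with bdevperf.py. A minimal sketch of that pattern, assuming it is run from the spdk checkout used in this workspace (paths are relative to that checkout; the volume-creation RPCs are sketched separately further down in this log):
# start bdevperf idle and wait for RPC configuration: queue depth 32, 4 KiB I/O, 3 s verify workload, cores 1-2 (-m 0x6)
./build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 &
# ... create the lvstore/lvol/compress stack over rpc.py ...
# trigger the configured verify job and print the per-core IOPS/latency summary seen in this log
./examples/bdev/bdevperf/bdevperf.py perform_tests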
00:29:15.383 [2024-07-25 12:11:01.186179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109196 ] 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:15.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.383 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:15.384 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:15.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.384 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:15.384 [2024-07-25 12:11:01.307686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.384 [2024-07-25 12:11:01.391732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.384 [2024-07-25 12:11:01.391738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.316 12:11:02 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:16.316 12:11:02 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:16.316 12:11:02 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:29:16.316 12:11:02 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:16.316 12:11:02 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:19.600 12:11:05 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:19.600 12:11:05 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:19.600 12:11:05 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:19.601 [ 00:29:19.601 { 00:29:19.601 "name": "Nvme0n1", 00:29:19.601 "aliases": [ 00:29:19.601 "6647c503-50c7-46c4-bc4b-9d4e5b7383c2" 00:29:19.601 ], 00:29:19.601 "product_name": "NVMe disk", 00:29:19.601 "block_size": 512, 00:29:19.601 "num_blocks": 3907029168, 00:29:19.601 "uuid": "6647c503-50c7-46c4-bc4b-9d4e5b7383c2", 00:29:19.601 "assigned_rate_limits": { 00:29:19.601 "rw_ios_per_sec": 0, 00:29:19.601 "rw_mbytes_per_sec": 0, 00:29:19.601 "r_mbytes_per_sec": 0, 00:29:19.601 "w_mbytes_per_sec": 0 00:29:19.601 }, 00:29:19.601 "claimed": false, 
00:29:19.601 "zoned": false, 00:29:19.601 "supported_io_types": { 00:29:19.601 "read": true, 00:29:19.601 "write": true, 00:29:19.601 "unmap": true, 00:29:19.601 "flush": true, 00:29:19.601 "reset": true, 00:29:19.601 "nvme_admin": true, 00:29:19.601 "nvme_io": true, 00:29:19.601 "nvme_io_md": false, 00:29:19.601 "write_zeroes": true, 00:29:19.601 "zcopy": false, 00:29:19.601 "get_zone_info": false, 00:29:19.601 "zone_management": false, 00:29:19.601 "zone_append": false, 00:29:19.601 "compare": false, 00:29:19.601 "compare_and_write": false, 00:29:19.601 "abort": true, 00:29:19.601 "seek_hole": false, 00:29:19.601 "seek_data": false, 00:29:19.601 "copy": false, 00:29:19.601 "nvme_iov_md": false 00:29:19.601 }, 00:29:19.601 "driver_specific": { 00:29:19.601 "nvme": [ 00:29:19.601 { 00:29:19.601 "pci_address": "0000:d8:00.0", 00:29:19.601 "trid": { 00:29:19.601 "trtype": "PCIe", 00:29:19.601 "traddr": "0000:d8:00.0" 00:29:19.601 }, 00:29:19.601 "ctrlr_data": { 00:29:19.601 "cntlid": 0, 00:29:19.601 "vendor_id": "0x8086", 00:29:19.601 "model_number": "INTEL SSDPE2KX020T8", 00:29:19.601 "serial_number": "BTLJ125505KA2P0BGN", 00:29:19.601 "firmware_revision": "VDV10170", 00:29:19.601 "oacs": { 00:29:19.601 "security": 0, 00:29:19.601 "format": 1, 00:29:19.601 "firmware": 1, 00:29:19.601 "ns_manage": 1 00:29:19.601 }, 00:29:19.601 "multi_ctrlr": false, 00:29:19.601 "ana_reporting": false 00:29:19.601 }, 00:29:19.601 "vs": { 00:29:19.601 "nvme_version": "1.2" 00:29:19.601 }, 00:29:19.601 "ns_data": { 00:29:19.601 "id": 1, 00:29:19.601 "can_share": false 00:29:19.601 } 00:29:19.601 } 00:29:19.601 ], 00:29:19.601 "mp_policy": "active_passive" 00:29:19.601 } 00:29:19.601 } 00:29:19.601 ] 00:29:19.601 12:11:05 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:19.601 12:11:05 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:20.976 26325853-23f6-4d8f-a970-c318f47bda21 00:29:20.976 12:11:06 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:21.234 e256cf09-17a9-449c-a221-1430ed3547a5 00:29:21.234 12:11:07 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:21.234 12:11:07 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:21.493 12:11:07 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:21.493 [ 00:29:21.493 { 00:29:21.493 "name": "e256cf09-17a9-449c-a221-1430ed3547a5", 00:29:21.493 "aliases": [ 00:29:21.493 "lvs0/lv0" 00:29:21.493 ], 00:29:21.493 "product_name": "Logical Volume", 00:29:21.493 "block_size": 512, 00:29:21.493 "num_blocks": 204800, 00:29:21.493 "uuid": "e256cf09-17a9-449c-a221-1430ed3547a5", 00:29:21.493 "assigned_rate_limits": { 00:29:21.493 "rw_ios_per_sec": 0, 00:29:21.493 
"rw_mbytes_per_sec": 0, 00:29:21.493 "r_mbytes_per_sec": 0, 00:29:21.493 "w_mbytes_per_sec": 0 00:29:21.493 }, 00:29:21.493 "claimed": false, 00:29:21.493 "zoned": false, 00:29:21.493 "supported_io_types": { 00:29:21.493 "read": true, 00:29:21.493 "write": true, 00:29:21.493 "unmap": true, 00:29:21.493 "flush": false, 00:29:21.493 "reset": true, 00:29:21.493 "nvme_admin": false, 00:29:21.493 "nvme_io": false, 00:29:21.493 "nvme_io_md": false, 00:29:21.493 "write_zeroes": true, 00:29:21.493 "zcopy": false, 00:29:21.493 "get_zone_info": false, 00:29:21.493 "zone_management": false, 00:29:21.493 "zone_append": false, 00:29:21.493 "compare": false, 00:29:21.493 "compare_and_write": false, 00:29:21.493 "abort": false, 00:29:21.493 "seek_hole": true, 00:29:21.493 "seek_data": true, 00:29:21.493 "copy": false, 00:29:21.493 "nvme_iov_md": false 00:29:21.493 }, 00:29:21.493 "driver_specific": { 00:29:21.493 "lvol": { 00:29:21.493 "lvol_store_uuid": "26325853-23f6-4d8f-a970-c318f47bda21", 00:29:21.493 "base_bdev": "Nvme0n1", 00:29:21.493 "thin_provision": true, 00:29:21.493 "num_allocated_clusters": 0, 00:29:21.493 "snapshot": false, 00:29:21.493 "clone": false, 00:29:21.493 "esnap_clone": false 00:29:21.493 } 00:29:21.493 } 00:29:21.493 } 00:29:21.493 ] 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:21.752 12:11:07 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:29:21.752 12:11:07 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:29:21.752 [2024-07-25 12:11:07.823839] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:21.752 COMP_lvs0/lv0 00:29:21.752 12:11:07 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:21.752 12:11:07 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:22.011 12:11:08 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:22.270 [ 00:29:22.270 { 00:29:22.270 "name": "COMP_lvs0/lv0", 00:29:22.270 "aliases": [ 00:29:22.270 "8b60b3af-3c11-5fca-8d2f-216e08aeb1c4" 00:29:22.270 ], 00:29:22.270 "product_name": "compress", 00:29:22.270 "block_size": 512, 00:29:22.270 "num_blocks": 200704, 00:29:22.270 "uuid": "8b60b3af-3c11-5fca-8d2f-216e08aeb1c4", 00:29:22.270 "assigned_rate_limits": { 00:29:22.270 "rw_ios_per_sec": 0, 00:29:22.270 "rw_mbytes_per_sec": 0, 00:29:22.270 "r_mbytes_per_sec": 0, 00:29:22.270 "w_mbytes_per_sec": 0 00:29:22.270 }, 00:29:22.270 "claimed": false, 00:29:22.270 "zoned": false, 00:29:22.270 "supported_io_types": { 00:29:22.270 "read": true, 00:29:22.270 "write": true, 00:29:22.270 "unmap": false, 00:29:22.270 "flush": false, 00:29:22.270 "reset": false, 00:29:22.270 "nvme_admin": false, 00:29:22.270 "nvme_io": false, 00:29:22.270 "nvme_io_md": false, 
00:29:22.270 "write_zeroes": true, 00:29:22.270 "zcopy": false, 00:29:22.270 "get_zone_info": false, 00:29:22.270 "zone_management": false, 00:29:22.270 "zone_append": false, 00:29:22.270 "compare": false, 00:29:22.270 "compare_and_write": false, 00:29:22.270 "abort": false, 00:29:22.270 "seek_hole": false, 00:29:22.270 "seek_data": false, 00:29:22.270 "copy": false, 00:29:22.270 "nvme_iov_md": false 00:29:22.270 }, 00:29:22.270 "driver_specific": { 00:29:22.270 "compress": { 00:29:22.270 "name": "COMP_lvs0/lv0", 00:29:22.270 "base_bdev_name": "e256cf09-17a9-449c-a221-1430ed3547a5", 00:29:22.270 "pm_path": "/tmp/pmem/982772f1-8a02-4a9e-916d-0db773b506af" 00:29:22.270 } 00:29:22.270 } 00:29:22.270 } 00:29:22.270 ] 00:29:22.270 12:11:08 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:22.270 12:11:08 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:22.529 Running I/O for 3 seconds... 00:29:25.815 00:29:25.815 Latency(us) 00:29:25.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.815 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:25.815 Verification LBA range: start 0x0 length 0x3100 00:29:25.815 COMP_lvs0/lv0 : 3.01 3415.02 13.34 0.00 0.00 9313.64 58.98 16777.22 00:29:25.815 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:25.815 Verification LBA range: start 0x3100 length 0x3100 00:29:25.815 COMP_lvs0/lv0 : 3.01 3450.95 13.48 0.00 0.00 9231.78 55.30 16777.22 00:29:25.815 =================================================================================================================== 00:29:25.815 Total : 6865.96 26.82 0.00 0.00 9272.51 55.30 16777.22 00:29:25.815 0 00:29:25.815 12:11:11 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:25.815 12:11:11 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:25.815 12:11:11 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:26.073 12:11:11 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:26.073 12:11:11 compress_isal -- compress/compress.sh@78 -- # killprocess 109196 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 109196 ']' 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@954 -- # kill -0 109196 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109196 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:26.073 12:11:11 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109196' 00:29:26.073 killing process with pid 109196 00:29:26.074 12:11:11 compress_isal -- common/autotest_common.sh@969 -- # kill 109196 00:29:26.074 Received shutdown signal, test time was about 3.000000 seconds 00:29:26.074 00:29:26.074 Latency(us) 00:29:26.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.074 
=================================================================================================================== 00:29:26.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:26.074 12:11:11 compress_isal -- common/autotest_common.sh@974 -- # wait 109196 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=111373 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:29:28.608 12:11:14 compress_isal -- compress/compress.sh@73 -- # waitforlisten 111373 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 111373 ']' 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:28.608 12:11:14 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:28.608 [2024-07-25 12:11:14.488565] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:29:28.608 [2024-07-25 12:11:14.488629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111373 ] 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:28.608 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:28.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.608 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:28.608 [2024-07-25 12:11:14.609315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.608 [2024-07-25 12:11:14.697185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.608 [2024-07-25 12:11:14.697187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.656 12:11:15 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:29.656 12:11:15 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:29.656 12:11:15 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:29:29.656 12:11:15 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:29.656 12:11:15 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:32.944 12:11:18 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:32.944 12:11:18 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:32.944 [ 00:29:32.944 { 00:29:32.944 "name": "Nvme0n1", 00:29:32.944 "aliases": [ 00:29:32.944 "ac6d1f3b-bd04-4726-8efd-268aafdf5e23" 00:29:32.944 ], 00:29:32.944 "product_name": "NVMe disk", 00:29:32.944 "block_size": 512, 00:29:32.944 "num_blocks": 3907029168, 00:29:32.944 "uuid": "ac6d1f3b-bd04-4726-8efd-268aafdf5e23", 00:29:32.944 "assigned_rate_limits": { 00:29:32.944 "rw_ios_per_sec": 0, 00:29:32.944 "rw_mbytes_per_sec": 0, 00:29:32.944 "r_mbytes_per_sec": 0, 00:29:32.944 "w_mbytes_per_sec": 0 00:29:32.944 }, 00:29:32.944 "claimed": false, 
00:29:32.944 "zoned": false, 00:29:32.944 "supported_io_types": { 00:29:32.944 "read": true, 00:29:32.944 "write": true, 00:29:32.944 "unmap": true, 00:29:32.944 "flush": true, 00:29:32.944 "reset": true, 00:29:32.944 "nvme_admin": true, 00:29:32.944 "nvme_io": true, 00:29:32.944 "nvme_io_md": false, 00:29:32.944 "write_zeroes": true, 00:29:32.944 "zcopy": false, 00:29:32.944 "get_zone_info": false, 00:29:32.944 "zone_management": false, 00:29:32.944 "zone_append": false, 00:29:32.944 "compare": false, 00:29:32.944 "compare_and_write": false, 00:29:32.944 "abort": true, 00:29:32.944 "seek_hole": false, 00:29:32.944 "seek_data": false, 00:29:32.944 "copy": false, 00:29:32.944 "nvme_iov_md": false 00:29:32.944 }, 00:29:32.944 "driver_specific": { 00:29:32.944 "nvme": [ 00:29:32.944 { 00:29:32.944 "pci_address": "0000:d8:00.0", 00:29:32.944 "trid": { 00:29:32.944 "trtype": "PCIe", 00:29:32.944 "traddr": "0000:d8:00.0" 00:29:32.944 }, 00:29:32.944 "ctrlr_data": { 00:29:32.944 "cntlid": 0, 00:29:32.944 "vendor_id": "0x8086", 00:29:32.944 "model_number": "INTEL SSDPE2KX020T8", 00:29:32.944 "serial_number": "BTLJ125505KA2P0BGN", 00:29:32.944 "firmware_revision": "VDV10170", 00:29:32.944 "oacs": { 00:29:32.944 "security": 0, 00:29:32.944 "format": 1, 00:29:32.944 "firmware": 1, 00:29:32.944 "ns_manage": 1 00:29:32.944 }, 00:29:32.944 "multi_ctrlr": false, 00:29:32.944 "ana_reporting": false 00:29:32.945 }, 00:29:32.945 "vs": { 00:29:32.945 "nvme_version": "1.2" 00:29:32.945 }, 00:29:32.945 "ns_data": { 00:29:32.945 "id": 1, 00:29:32.945 "can_share": false 00:29:32.945 } 00:29:32.945 } 00:29:32.945 ], 00:29:32.945 "mp_policy": "active_passive" 00:29:32.945 } 00:29:32.945 } 00:29:32.945 ] 00:29:32.945 12:11:18 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:32.945 12:11:18 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:34.345 1d4b8302-2d38-4cf5-8702-dd867b5ead38 00:29:34.345 12:11:20 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:34.345 471575e9-45ef-464d-8a11-2216860d27f7 00:29:34.345 12:11:20 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:34.345 12:11:20 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:34.605 12:11:20 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:34.864 [ 00:29:34.864 { 00:29:34.864 "name": "471575e9-45ef-464d-8a11-2216860d27f7", 00:29:34.864 "aliases": [ 00:29:34.864 "lvs0/lv0" 00:29:34.864 ], 00:29:34.864 "product_name": "Logical Volume", 00:29:34.864 "block_size": 512, 00:29:34.864 "num_blocks": 204800, 00:29:34.864 "uuid": "471575e9-45ef-464d-8a11-2216860d27f7", 00:29:34.864 "assigned_rate_limits": { 00:29:34.864 "rw_ios_per_sec": 0, 00:29:34.864 
"rw_mbytes_per_sec": 0, 00:29:34.864 "r_mbytes_per_sec": 0, 00:29:34.864 "w_mbytes_per_sec": 0 00:29:34.864 }, 00:29:34.864 "claimed": false, 00:29:34.864 "zoned": false, 00:29:34.864 "supported_io_types": { 00:29:34.864 "read": true, 00:29:34.864 "write": true, 00:29:34.864 "unmap": true, 00:29:34.864 "flush": false, 00:29:34.864 "reset": true, 00:29:34.864 "nvme_admin": false, 00:29:34.864 "nvme_io": false, 00:29:34.864 "nvme_io_md": false, 00:29:34.864 "write_zeroes": true, 00:29:34.864 "zcopy": false, 00:29:34.864 "get_zone_info": false, 00:29:34.864 "zone_management": false, 00:29:34.864 "zone_append": false, 00:29:34.864 "compare": false, 00:29:34.864 "compare_and_write": false, 00:29:34.864 "abort": false, 00:29:34.864 "seek_hole": true, 00:29:34.864 "seek_data": true, 00:29:34.864 "copy": false, 00:29:34.864 "nvme_iov_md": false 00:29:34.864 }, 00:29:34.864 "driver_specific": { 00:29:34.864 "lvol": { 00:29:34.864 "lvol_store_uuid": "1d4b8302-2d38-4cf5-8702-dd867b5ead38", 00:29:34.864 "base_bdev": "Nvme0n1", 00:29:34.864 "thin_provision": true, 00:29:34.864 "num_allocated_clusters": 0, 00:29:34.864 "snapshot": false, 00:29:34.864 "clone": false, 00:29:34.864 "esnap_clone": false 00:29:34.864 } 00:29:34.864 } 00:29:34.864 } 00:29:34.864 ] 00:29:34.864 12:11:20 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:34.864 12:11:20 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:29:34.864 12:11:20 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:29:35.122 [2024-07-25 12:11:21.064064] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:35.122 COMP_lvs0/lv0 00:29:35.122 12:11:21 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:35.122 12:11:21 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:35.381 12:11:21 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:35.640 [ 00:29:35.640 { 00:29:35.640 "name": "COMP_lvs0/lv0", 00:29:35.640 "aliases": [ 00:29:35.640 "d452926f-662e-5ecf-bbd8-d284829eba89" 00:29:35.640 ], 00:29:35.640 "product_name": "compress", 00:29:35.640 "block_size": 4096, 00:29:35.640 "num_blocks": 25088, 00:29:35.640 "uuid": "d452926f-662e-5ecf-bbd8-d284829eba89", 00:29:35.640 "assigned_rate_limits": { 00:29:35.640 "rw_ios_per_sec": 0, 00:29:35.640 "rw_mbytes_per_sec": 0, 00:29:35.640 "r_mbytes_per_sec": 0, 00:29:35.640 "w_mbytes_per_sec": 0 00:29:35.640 }, 00:29:35.640 "claimed": false, 00:29:35.640 "zoned": false, 00:29:35.640 "supported_io_types": { 00:29:35.640 "read": true, 00:29:35.640 "write": true, 00:29:35.640 "unmap": false, 00:29:35.640 "flush": false, 00:29:35.640 "reset": false, 00:29:35.640 "nvme_admin": false, 00:29:35.640 "nvme_io": false, 00:29:35.640 "nvme_io_md": false, 
00:29:35.640 "write_zeroes": true, 00:29:35.640 "zcopy": false, 00:29:35.640 "get_zone_info": false, 00:29:35.640 "zone_management": false, 00:29:35.640 "zone_append": false, 00:29:35.640 "compare": false, 00:29:35.640 "compare_and_write": false, 00:29:35.640 "abort": false, 00:29:35.640 "seek_hole": false, 00:29:35.640 "seek_data": false, 00:29:35.640 "copy": false, 00:29:35.640 "nvme_iov_md": false 00:29:35.640 }, 00:29:35.640 "driver_specific": { 00:29:35.640 "compress": { 00:29:35.640 "name": "COMP_lvs0/lv0", 00:29:35.640 "base_bdev_name": "471575e9-45ef-464d-8a11-2216860d27f7", 00:29:35.640 "pm_path": "/tmp/pmem/36e3680a-54e8-4c85-87f0-5881413601cd" 00:29:35.640 } 00:29:35.640 } 00:29:35.640 } 00:29:35.640 ] 00:29:35.640 12:11:21 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:35.640 12:11:21 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:35.640 Running I/O for 3 seconds... 00:29:38.922 00:29:38.922 Latency(us) 00:29:38.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.922 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:38.922 Verification LBA range: start 0x0 length 0x3100 00:29:38.922 COMP_lvs0/lv0 : 3.01 3551.01 13.87 0.00 0.00 8958.39 58.57 14155.78 00:29:38.922 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:38.922 Verification LBA range: start 0x3100 length 0x3100 00:29:38.922 COMP_lvs0/lv0 : 3.01 3529.99 13.79 0.00 0.00 9019.70 57.75 14155.78 00:29:38.923 =================================================================================================================== 00:29:38.923 Total : 7081.00 27.66 0.00 0.00 8988.96 57.75 14155.78 00:29:38.923 0 00:29:38.923 12:11:24 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:38.923 12:11:24 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:38.923 12:11:24 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:39.181 12:11:25 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:39.181 12:11:25 compress_isal -- compress/compress.sh@78 -- # killprocess 111373 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 111373 ']' 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@954 -- # kill -0 111373 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111373 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111373' 00:29:39.181 killing process with pid 111373 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@969 -- # kill 111373 00:29:39.181 Received shutdown signal, test time was about 3.000000 seconds 00:29:39.181 00:29:39.181 Latency(us) 00:29:39.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.181 
=================================================================================================================== 00:29:39.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.181 12:11:25 compress_isal -- common/autotest_common.sh@974 -- # wait 111373 00:29:41.712 12:11:27 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:29:41.712 12:11:27 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:41.712 12:11:27 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=113513 00:29:41.712 12:11:27 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:41.713 12:11:27 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:29:41.713 12:11:27 compress_isal -- compress/compress.sh@57 -- # waitforlisten 113513 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 113513 ']' 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.713 12:11:27 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:41.713 [2024-07-25 12:11:27.722522] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:41.713 [2024-07-25 12:11:27.722584] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113513 ] 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:41.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:41.713 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:41.971 [2024-07-25 12:11:27.857113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.971 [2024-07-25 12:11:27.948292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.972 [2024-07-25 12:11:27.948387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.972 [2024-07-25 12:11:27.948392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.539 12:11:28 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.539 12:11:28 compress_isal -- 
common/autotest_common.sh@864 -- # return 0 00:29:42.539 12:11:28 compress_isal -- compress/compress.sh@58 -- # create_vols 00:29:42.539 12:11:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:42.539 12:11:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:45.824 12:11:31 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:45.824 12:11:31 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:46.082 12:11:31 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:46.341 [ 00:29:46.341 { 00:29:46.341 "name": "Nvme0n1", 00:29:46.341 "aliases": [ 00:29:46.341 "b61c3867-7b22-44b2-abd0-625fe99e8a3f" 00:29:46.341 ], 00:29:46.341 "product_name": "NVMe disk", 00:29:46.341 "block_size": 512, 00:29:46.341 "num_blocks": 3907029168, 00:29:46.341 "uuid": "b61c3867-7b22-44b2-abd0-625fe99e8a3f", 00:29:46.341 "assigned_rate_limits": { 00:29:46.341 "rw_ios_per_sec": 0, 00:29:46.341 "rw_mbytes_per_sec": 0, 00:29:46.341 "r_mbytes_per_sec": 0, 00:29:46.341 "w_mbytes_per_sec": 0 00:29:46.341 }, 00:29:46.341 "claimed": false, 00:29:46.341 "zoned": false, 00:29:46.341 "supported_io_types": { 00:29:46.341 "read": true, 00:29:46.341 "write": true, 00:29:46.341 "unmap": true, 00:29:46.341 "flush": true, 00:29:46.341 "reset": true, 00:29:46.341 "nvme_admin": true, 00:29:46.341 "nvme_io": true, 00:29:46.341 "nvme_io_md": false, 00:29:46.341 "write_zeroes": true, 00:29:46.341 "zcopy": false, 00:29:46.341 "get_zone_info": false, 00:29:46.341 "zone_management": false, 00:29:46.341 "zone_append": false, 00:29:46.341 "compare": false, 00:29:46.341 "compare_and_write": false, 00:29:46.341 "abort": true, 00:29:46.341 "seek_hole": false, 00:29:46.341 "seek_data": false, 00:29:46.341 "copy": false, 00:29:46.341 "nvme_iov_md": false 00:29:46.341 }, 00:29:46.341 "driver_specific": { 00:29:46.341 "nvme": [ 00:29:46.341 { 00:29:46.341 "pci_address": "0000:d8:00.0", 00:29:46.341 "trid": { 00:29:46.341 "trtype": "PCIe", 00:29:46.341 "traddr": "0000:d8:00.0" 00:29:46.341 }, 00:29:46.341 "ctrlr_data": { 00:29:46.341 "cntlid": 0, 00:29:46.341 "vendor_id": "0x8086", 00:29:46.341 "model_number": "INTEL SSDPE2KX020T8", 00:29:46.341 "serial_number": "BTLJ125505KA2P0BGN", 00:29:46.341 "firmware_revision": "VDV10170", 00:29:46.341 "oacs": { 00:29:46.341 "security": 0, 00:29:46.341 "format": 1, 00:29:46.341 "firmware": 1, 00:29:46.341 "ns_manage": 1 00:29:46.341 }, 00:29:46.341 "multi_ctrlr": false, 00:29:46.341 "ana_reporting": false 00:29:46.341 }, 00:29:46.341 "vs": { 00:29:46.341 "nvme_version": "1.2" 00:29:46.341 }, 00:29:46.341 "ns_data": { 00:29:46.341 "id": 1, 00:29:46.341 "can_share": false 00:29:46.341 } 00:29:46.341 } 00:29:46.341 ], 00:29:46.341 "mp_policy": "active_passive" 00:29:46.341 } 00:29:46.341 } 00:29:46.341 ] 
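Every pass above builds the same volume stack before submitting I/O, and the bdevio run that follows does so one more time: an lvstore with clear-method none on top of Nvme0n1, a 100 MiB thin-provisioned lvol, and a compress vbdev claiming that lvol with a pmem backing file under /tmp/pmem. A minimal sketch of that RPC sequence, using the same relative paths as this workspace (the optional -l flag sets the compressed volume's logical block size, which is what the 512 and 4096 bdevperf passes vary):
# lvstore plus a 100 MiB thin-provisioned lvol on the NVMe bdev
./scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
./scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
# compress vbdev on top of the lvol, backed by a pmem file created under /tmp/pmem
./scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem    # add -l 512 or -l 4096 to set the logical block size
# poll (up to 2000 ms) until the COMP_ bdev is registered before running I/O against it
./scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000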
00:29:46.341 12:11:32 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:46.341 12:11:32 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:47.716 ed77e305-6084-4933-8ada-f8b00f940f5b 00:29:47.716 12:11:33 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:47.716 2c8dfaf5-ad4b-4ac8-ab1e-2d342bd98e1c 00:29:47.716 12:11:33 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:47.716 12:11:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:47.975 12:11:33 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:48.234 [ 00:29:48.234 { 00:29:48.234 "name": "2c8dfaf5-ad4b-4ac8-ab1e-2d342bd98e1c", 00:29:48.234 "aliases": [ 00:29:48.234 "lvs0/lv0" 00:29:48.234 ], 00:29:48.234 "product_name": "Logical Volume", 00:29:48.234 "block_size": 512, 00:29:48.234 "num_blocks": 204800, 00:29:48.234 "uuid": "2c8dfaf5-ad4b-4ac8-ab1e-2d342bd98e1c", 00:29:48.234 "assigned_rate_limits": { 00:29:48.234 "rw_ios_per_sec": 0, 00:29:48.234 "rw_mbytes_per_sec": 0, 00:29:48.234 "r_mbytes_per_sec": 0, 00:29:48.234 "w_mbytes_per_sec": 0 00:29:48.234 }, 00:29:48.234 "claimed": false, 00:29:48.234 "zoned": false, 00:29:48.234 "supported_io_types": { 00:29:48.234 "read": true, 00:29:48.234 "write": true, 00:29:48.234 "unmap": true, 00:29:48.234 "flush": false, 00:29:48.234 "reset": true, 00:29:48.234 "nvme_admin": false, 00:29:48.234 "nvme_io": false, 00:29:48.234 "nvme_io_md": false, 00:29:48.234 "write_zeroes": true, 00:29:48.234 "zcopy": false, 00:29:48.234 "get_zone_info": false, 00:29:48.234 "zone_management": false, 00:29:48.234 "zone_append": false, 00:29:48.234 "compare": false, 00:29:48.234 "compare_and_write": false, 00:29:48.234 "abort": false, 00:29:48.234 "seek_hole": true, 00:29:48.234 "seek_data": true, 00:29:48.234 "copy": false, 00:29:48.234 "nvme_iov_md": false 00:29:48.234 }, 00:29:48.234 "driver_specific": { 00:29:48.234 "lvol": { 00:29:48.234 "lvol_store_uuid": "ed77e305-6084-4933-8ada-f8b00f940f5b", 00:29:48.234 "base_bdev": "Nvme0n1", 00:29:48.234 "thin_provision": true, 00:29:48.234 "num_allocated_clusters": 0, 00:29:48.234 "snapshot": false, 00:29:48.234 "clone": false, 00:29:48.234 "esnap_clone": false 00:29:48.234 } 00:29:48.234 } 00:29:48.234 } 00:29:48.234 ] 00:29:48.234 12:11:34 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:48.234 12:11:34 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:48.234 12:11:34 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:48.530 [2024-07-25 12:11:34.427490] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device 
and virtual bdev for: COMP_lvs0/lv0 00:29:48.530 COMP_lvs0/lv0 00:29:48.530 12:11:34 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:48.530 12:11:34 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:48.789 12:11:34 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:48.789 [ 00:29:48.789 { 00:29:48.789 "name": "COMP_lvs0/lv0", 00:29:48.789 "aliases": [ 00:29:48.789 "8ba3b6fd-e226-5816-a6ae-acaaa17ecfd6" 00:29:48.789 ], 00:29:48.789 "product_name": "compress", 00:29:48.789 "block_size": 512, 00:29:48.789 "num_blocks": 200704, 00:29:48.789 "uuid": "8ba3b6fd-e226-5816-a6ae-acaaa17ecfd6", 00:29:48.789 "assigned_rate_limits": { 00:29:48.789 "rw_ios_per_sec": 0, 00:29:48.789 "rw_mbytes_per_sec": 0, 00:29:48.789 "r_mbytes_per_sec": 0, 00:29:48.789 "w_mbytes_per_sec": 0 00:29:48.789 }, 00:29:48.789 "claimed": false, 00:29:48.789 "zoned": false, 00:29:48.789 "supported_io_types": { 00:29:48.789 "read": true, 00:29:48.789 "write": true, 00:29:48.789 "unmap": false, 00:29:48.789 "flush": false, 00:29:48.789 "reset": false, 00:29:48.789 "nvme_admin": false, 00:29:48.789 "nvme_io": false, 00:29:48.789 "nvme_io_md": false, 00:29:48.789 "write_zeroes": true, 00:29:48.789 "zcopy": false, 00:29:48.789 "get_zone_info": false, 00:29:48.789 "zone_management": false, 00:29:48.789 "zone_append": false, 00:29:48.789 "compare": false, 00:29:48.789 "compare_and_write": false, 00:29:48.789 "abort": false, 00:29:48.789 "seek_hole": false, 00:29:48.789 "seek_data": false, 00:29:48.789 "copy": false, 00:29:48.789 "nvme_iov_md": false 00:29:48.789 }, 00:29:48.789 "driver_specific": { 00:29:48.789 "compress": { 00:29:48.790 "name": "COMP_lvs0/lv0", 00:29:48.790 "base_bdev_name": "2c8dfaf5-ad4b-4ac8-ab1e-2d342bd98e1c", 00:29:48.790 "pm_path": "/tmp/pmem/e4b8358d-8efc-4ba7-a114-964cc2b30b60" 00:29:48.790 } 00:29:48.790 } 00:29:48.790 } 00:29:48.790 ] 00:29:48.790 12:11:34 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:48.790 12:11:34 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:49.048 I/O targets: 00:29:49.048 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:29:49.048 00:29:49.048 00:29:49.048 CUnit - A unit testing framework for C - Version 2.1-3 00:29:49.048 http://cunit.sourceforge.net/ 00:29:49.048 00:29:49.048 00:29:49.048 Suite: bdevio tests on: COMP_lvs0/lv0 00:29:49.048 Test: blockdev write read block ...passed 00:29:49.048 Test: blockdev write zeroes read block ...passed 00:29:49.048 Test: blockdev write zeroes read no split ...passed 00:29:49.048 Test: blockdev write zeroes read split ...passed 00:29:49.048 Test: blockdev write zeroes read split partial ...passed 00:29:49.048 Test: blockdev reset ...[2024-07-25 12:11:35.052522] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 
00:29:49.048 passed 00:29:49.048 Test: blockdev write read 8 blocks ...passed 00:29:49.048 Test: blockdev write read size > 128k ...passed 00:29:49.048 Test: blockdev write read invalid size ...passed 00:29:49.048 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:49.048 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:49.048 Test: blockdev write read max offset ...passed 00:29:49.048 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:49.048 Test: blockdev writev readv 8 blocks ...passed 00:29:49.048 Test: blockdev writev readv 30 x 1block ...passed 00:29:49.048 Test: blockdev writev readv block ...passed 00:29:49.048 Test: blockdev writev readv size > 128k ...passed 00:29:49.048 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:49.048 Test: blockdev comparev and writev ...passed 00:29:49.048 Test: blockdev nvme passthru rw ...passed 00:29:49.048 Test: blockdev nvme passthru vendor specific ...passed 00:29:49.048 Test: blockdev nvme admin passthru ...passed 00:29:49.048 Test: blockdev copy ...passed 00:29:49.048 00:29:49.049 Run Summary: Type Total Ran Passed Failed Inactive 00:29:49.049 suites 1 1 n/a 0 0 00:29:49.049 tests 23 23 23 0 0 00:29:49.049 asserts 130 130 130 0 n/a 00:29:49.049 00:29:49.049 Elapsed time = 0.171 seconds 00:29:49.049 0 00:29:49.049 12:11:35 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:29:49.049 12:11:35 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:49.307 12:11:35 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:49.566 12:11:35 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:29:49.566 12:11:35 compress_isal -- compress/compress.sh@62 -- # killprocess 113513 00:29:49.566 12:11:35 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 113513 ']' 00:29:49.566 12:11:35 compress_isal -- common/autotest_common.sh@954 -- # kill -0 113513 00:29:49.566 12:11:35 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:49.566 12:11:35 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113513 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113513' 00:29:49.567 killing process with pid 113513 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@969 -- # kill 113513 00:29:49.567 12:11:35 compress_isal -- common/autotest_common.sh@974 -- # wait 113513 00:29:52.099 12:11:38 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:29:52.099 12:11:38 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:29:52.099 00:29:52.099 real 0m50.251s 00:29:52.099 user 1m55.042s 00:29:52.099 sys 0m4.047s 00:29:52.099 12:11:38 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.099 12:11:38 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:52.099 ************************************ 00:29:52.099 END TEST compress_isal 00:29:52.099 ************************************ 00:29:52.099 12:11:38 -- 
spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:52.099 12:11:38 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:29:52.099 12:11:38 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:29:52.099 12:11:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:52.099 12:11:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.099 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:29:52.099 ************************************ 00:29:52.099 START TEST blockdev_crypto_aesni 00:29:52.099 ************************************ 00:29:52.099 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:29:52.359 * Looking for test storage... 00:29:52.359 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=115329 
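For readers following the trace, the crypto_aesni configuration that blockdev.sh now builds against this target (the *NOTICE* lines that follow) corresponds roughly to the rpc.py sequence sketched below. This is a reconstruction from the log, not the script itself: the malloc sizes are taken from the bdev dumps further down, while the key material and exact option spellings are placeholders/assumptions and may differ between SPDK versions.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"
# (the script may also scan/enable the dpdk_cryptodev accel module before this point)
$rpc dpdk_cryptodev_set_driver -d crypto_aesni_mb          # "Using driver crypto_aesni_mb"
$rpc accel_assign_opc -o encrypt -m dpdk_cryptodev         # encrypt handled by the DPDK cryptodev accel module
$rpc accel_assign_opc -o decrypt -m dpdk_cryptodev         # decrypt handled likewise
$rpc framework_start_init                                  # the target was launched with --wait-for-rpc
$rpc bdev_malloc_create -b Malloc0 32 512                  # 65536 x 512 B blocks, per the bdev dump below
$rpc bdev_malloc_create -b Malloc1 32 512
$rpc bdev_malloc_create -b Malloc2 32 4096                 # 8192 x 4 KiB blocks
$rpc bdev_malloc_create -b Malloc3 32 4096
$rpc accel_crypto_key_create -c AES_CBC -k <hex_key> -n test_dek_aesni_cbc_1   # <hex_key> is a placeholder, not the real test key
$rpc bdev_crypto_create -n test_dek_aesni_cbc_1 Malloc0 crypto_ram             # stacks crypto_ram on Malloc0
# ...repeated with test_dek_aesni_cbc_2..4 over Malloc1..3 to create crypto_ram2..crypto_ram4
$rpc bdev_wait_for_examine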
00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:29:52.359 12:11:38 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 115329 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 115329 ']' 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.359 12:11:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:52.359 [2024-07-25 12:11:38.318631] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:52.359 [2024-07-25 12:11:38.318694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115329 ] 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.5 cannot be 
used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:52.359 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:52.359 [2024-07-25 12:11:38.451113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.619 [2024-07-25 12:11:38.539210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.185 12:11:39 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.185 12:11:39 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:29:53.186 12:11:39 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:53.186 12:11:39 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:29:53.186 12:11:39 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:29:53.186 12:11:39 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.186 12:11:39 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:53.186 [2024-07-25 12:11:39.217332] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:53.186 
[2024-07-25 12:11:39.225360] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:53.186 [2024-07-25 12:11:39.233378] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:53.186 [2024-07-25 12:11:39.300490] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:55.772 true 00:29:55.772 true 00:29:55.772 true 00:29:55.772 true 00:29:55.772 Malloc0 00:29:55.772 Malloc1 00:29:55.772 Malloc2 00:29:55.772 Malloc3 00:29:55.772 [2024-07-25 12:11:41.639914] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:55.772 crypto_ram 00:29:55.772 [2024-07-25 12:11:41.647934] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:55.772 crypto_ram2 00:29:55.772 [2024-07-25 12:11:41.655957] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:55.772 crypto_ram3 00:29:55.772 [2024-07-25 12:11:41.663979] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:55.772 crypto_ram4 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:55.772 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.772 12:11:41 
blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:55.772 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:55.773 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9f77bd07-dfbf-5fed-baf0-3877e8e5d825"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9f77bd07-dfbf-5fed-baf0-3877e8e5d825",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "716e1e72-9312-5774-8e98-df9e3379bdbf"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "716e1e72-9312-5774-8e98-df9e3379bdbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "87a5dbb9-238a-5173-ac31-cdb6ff866b25"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "87a5dbb9-238a-5173-ac31-cdb6ff866b25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' 
"name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b9a62391-7c1f-587d-a960-e4103d75ac3f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b9a62391-7c1f-587d-a960-e4103d75ac3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:29:56.032 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:56.032 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:29:56.032 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:56.032 12:11:41 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 115329 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 115329 ']' 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 115329 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115329 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115329' 00:29:56.032 killing process with pid 115329 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 115329 00:29:56.032 12:11:41 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 115329 00:29:56.601 12:11:42 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:56.601 12:11:42 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:56.601 12:11:42 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:56.601 12:11:42 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:56.601 12:11:42 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:56.601 ************************************ 00:29:56.601 START TEST bdev_hello_world 00:29:56.601 ************************************ 00:29:56.601 12:11:42 blockdev_crypto_aesni.bdev_hello_world -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:56.601 [2024-07-25 12:11:42.520858] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:56.601 [2024-07-25 12:11:42.520912] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115993 ] 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 
0000:3f:01.4 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:56.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:56.601 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:56.601 [2024-07-25 12:11:42.651697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.861 [2024-07-25 12:11:42.734784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.861 [2024-07-25 12:11:42.756004] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:56.861 [2024-07-25 12:11:42.764030] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:56.861 [2024-07-25 12:11:42.772049] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:56.861 [2024-07-25 12:11:42.881311] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:59.393 [2024-07-25 12:11:45.058466] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:59.393 [2024-07-25 12:11:45.058521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:59.393 [2024-07-25 12:11:45.058534] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:59.393 [2024-07-25 12:11:45.066485] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:59.393 [2024-07-25 12:11:45.066504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:59.393 [2024-07-25 12:11:45.066515] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:59.393 [2024-07-25 12:11:45.074505] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:59.393 [2024-07-25 12:11:45.074521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:29:59.393 [2024-07-25 12:11:45.074532] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:29:59.393 [2024-07-25 12:11:45.082525] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:59.393 [2024-07-25 12:11:45.082541] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:29:59.393 [2024-07-25 12:11:45.082552] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:59.393 [2024-07-25 12:11:45.153594] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:59.393 [2024-07-25 12:11:45.153635] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:29:59.393 [2024-07-25 12:11:45.153652] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:59.393 [2024-07-25 12:11:45.154822] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:59.393 [2024-07-25 12:11:45.154885] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:59.393 [2024-07-25 12:11:45.154900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:59.393 [2024-07-25 12:11:45.154944] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:59.393 00:29:59.393 [2024-07-25 12:11:45.154962] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:59.393 00:29:59.393 real 0m3.015s 00:29:59.393 user 0m2.639s 00:29:59.393 sys 0m0.335s 00:29:59.393 12:11:45 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.393 12:11:45 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:59.393 ************************************ 00:29:59.393 END TEST bdev_hello_world 00:29:59.393 ************************************ 00:29:59.653 12:11:45 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:59.653 12:11:45 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:59.653 12:11:45 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.653 12:11:45 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:59.653 ************************************ 00:29:59.653 START TEST bdev_bounds 00:29:59.653 ************************************ 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=116535 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 116535' 00:29:59.653 Process bdevio pid: 116535 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 116535 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 116535 ']' 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.653 12:11:45 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.653 12:11:45 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:59.653 [2024-07-25 12:11:45.618173] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:59.653 [2024-07-25 12:11:45.618231] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116535 ] 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 
0000:3f:01.2 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:59.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.653 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:59.653 [2024-07-25 12:11:45.751264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:59.912 [2024-07-25 12:11:45.840973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.912 [2024-07-25 12:11:45.841069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.912 [2024-07-25 12:11:45.841074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.912 [2024-07-25 12:11:45.862384] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:59.912 [2024-07-25 12:11:45.870405] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:59.912 [2024-07-25 12:11:45.878423] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:59.912 [2024-07-25 12:11:45.976443] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:02.447 [2024-07-25 12:11:48.147020] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:02.447 [2024-07-25 12:11:48.147094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:02.447 [2024-07-25 12:11:48.147108] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:02.447 [2024-07-25 12:11:48.155037] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:02.447 [2024-07-25 12:11:48.155056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:02.447 [2024-07-25 
12:11:48.155067] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:02.447 [2024-07-25 12:11:48.163061] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:02.447 [2024-07-25 12:11:48.163078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:02.447 [2024-07-25 12:11:48.163088] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:02.447 [2024-07-25 12:11:48.171085] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:02.447 [2024-07-25 12:11:48.171102] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:02.447 [2024-07-25 12:11:48.171112] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:02.447 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.447 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:30:02.447 12:11:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:02.447 I/O targets: 00:30:02.447 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:30:02.447 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:30:02.447 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:30:02.447 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:30:02.447 00:30:02.447 00:30:02.447 CUnit - A unit testing framework for C - Version 2.1-3 00:30:02.447 http://cunit.sourceforge.net/ 00:30:02.447 00:30:02.447 00:30:02.447 Suite: bdevio tests on: crypto_ram4 00:30:02.447 Test: blockdev write read block ...passed 00:30:02.447 Test: blockdev write zeroes read block ...passed 00:30:02.447 Test: blockdev write zeroes read no split ...passed 00:30:02.447 Test: blockdev write zeroes read split ...passed 00:30:02.447 Test: blockdev write zeroes read split partial ...passed 00:30:02.447 Test: blockdev reset ...passed 00:30:02.447 Test: blockdev write read 8 blocks ...passed 00:30:02.447 Test: blockdev write read size > 128k ...passed 00:30:02.447 Test: blockdev write read invalid size ...passed 00:30:02.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.447 Test: blockdev write read max offset ...passed 00:30:02.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:02.447 Test: blockdev writev readv 8 blocks ...passed 00:30:02.447 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.447 Test: blockdev writev readv block ...passed 00:30:02.447 Test: blockdev writev readv size > 128k ...passed 00:30:02.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.447 Test: blockdev comparev and writev ...passed 00:30:02.447 Test: blockdev nvme passthru rw ...passed 00:30:02.447 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.447 Test: blockdev nvme admin passthru ...passed 00:30:02.447 Test: blockdev copy ...passed 00:30:02.447 Suite: bdevio tests on: crypto_ram3 00:30:02.447 Test: blockdev write read block ...passed 00:30:02.447 Test: blockdev write zeroes read block ...passed 00:30:02.447 Test: blockdev write zeroes read no split ...passed 00:30:02.447 Test: blockdev write zeroes read split ...passed 00:30:02.447 Test: 
blockdev write zeroes read split partial ...passed 00:30:02.447 Test: blockdev reset ...passed 00:30:02.447 Test: blockdev write read 8 blocks ...passed 00:30:02.447 Test: blockdev write read size > 128k ...passed 00:30:02.447 Test: blockdev write read invalid size ...passed 00:30:02.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.447 Test: blockdev write read max offset ...passed 00:30:02.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:02.447 Test: blockdev writev readv 8 blocks ...passed 00:30:02.447 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.447 Test: blockdev writev readv block ...passed 00:30:02.447 Test: blockdev writev readv size > 128k ...passed 00:30:02.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.447 Test: blockdev comparev and writev ...passed 00:30:02.447 Test: blockdev nvme passthru rw ...passed 00:30:02.447 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.447 Test: blockdev nvme admin passthru ...passed 00:30:02.447 Test: blockdev copy ...passed 00:30:02.447 Suite: bdevio tests on: crypto_ram2 00:30:02.447 Test: blockdev write read block ...passed 00:30:02.447 Test: blockdev write zeroes read block ...passed 00:30:02.447 Test: blockdev write zeroes read no split ...passed 00:30:02.447 Test: blockdev write zeroes read split ...passed 00:30:02.447 Test: blockdev write zeroes read split partial ...passed 00:30:02.447 Test: blockdev reset ...passed 00:30:02.447 Test: blockdev write read 8 blocks ...passed 00:30:02.447 Test: blockdev write read size > 128k ...passed 00:30:02.447 Test: blockdev write read invalid size ...passed 00:30:02.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.447 Test: blockdev write read max offset ...passed 00:30:02.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:02.447 Test: blockdev writev readv 8 blocks ...passed 00:30:02.447 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.447 Test: blockdev writev readv block ...passed 00:30:02.447 Test: blockdev writev readv size > 128k ...passed 00:30:02.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.447 Test: blockdev comparev and writev ...passed 00:30:02.447 Test: blockdev nvme passthru rw ...passed 00:30:02.447 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.447 Test: blockdev nvme admin passthru ...passed 00:30:02.447 Test: blockdev copy ...passed 00:30:02.447 Suite: bdevio tests on: crypto_ram 00:30:02.447 Test: blockdev write read block ...passed 00:30:02.447 Test: blockdev write zeroes read block ...passed 00:30:02.447 Test: blockdev write zeroes read no split ...passed 00:30:02.706 Test: blockdev write zeroes read split ...passed 00:30:02.706 Test: blockdev write zeroes read split partial ...passed 00:30:02.707 Test: blockdev reset ...passed 00:30:02.707 Test: blockdev write read 8 blocks ...passed 00:30:02.707 Test: blockdev write read size > 128k ...passed 00:30:02.707 Test: blockdev write read invalid size ...passed 00:30:02.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.707 Test: blockdev write read max offset ...passed 00:30:02.707 Test: blockdev write read 2 
blocks on overlapped address offset ...passed 00:30:02.707 Test: blockdev writev readv 8 blocks ...passed 00:30:02.707 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.707 Test: blockdev writev readv block ...passed 00:30:02.707 Test: blockdev writev readv size > 128k ...passed 00:30:02.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.707 Test: blockdev comparev and writev ...passed 00:30:02.707 Test: blockdev nvme passthru rw ...passed 00:30:02.707 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.707 Test: blockdev nvme admin passthru ...passed 00:30:02.707 Test: blockdev copy ...passed 00:30:02.707 00:30:02.707 Run Summary: Type Total Ran Passed Failed Inactive 00:30:02.707 suites 4 4 n/a 0 0 00:30:02.707 tests 92 92 92 0 0 00:30:02.707 asserts 520 520 520 0 n/a 00:30:02.707 00:30:02.707 Elapsed time = 0.509 seconds 00:30:02.707 0 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 116535 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 116535 ']' 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 116535 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116535 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116535' 00:30:02.707 killing process with pid 116535 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 116535 00:30:02.707 12:11:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 116535 00:30:02.966 12:11:49 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:02.966 00:30:02.966 real 0m3.468s 00:30:02.966 user 0m9.693s 00:30:02.966 sys 0m0.516s 00:30:02.966 12:11:49 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:02.966 12:11:49 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:02.966 ************************************ 00:30:02.966 END TEST bdev_bounds 00:30:02.966 ************************************ 00:30:02.966 12:11:49 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:30:02.966 12:11:49 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:02.966 12:11:49 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:02.966 12:11:49 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:03.225 ************************************ 00:30:03.225 START TEST bdev_nbd 00:30:03.225 ************************************ 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=117094 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 117094 /var/tmp/spdk-nbd.sock 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 117094 ']' 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:03.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
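The nbd_function_test invoked above exercises each crypto bdev through the kernel NBD driver rather than through SPDK-internal I/O paths. A minimal sketch of that mechanism (not the test script itself; the RPC socket, bdev and nbd names are taken from the log, the dd parameters are illustrative) looks like:
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk crypto_ram /dev/nbd0                        # export the bdev as a kernel block device
$rpc nbd_get_disks                                              # list active bdev <-> nbd mappings
dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=16 oflag=direct   # plain block I/O now lands on the crypto bdev
$rpc nbd_stop_disk /dev/nbd0                                    # tear the mapping down again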
00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:03.225 12:11:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:03.225 [2024-07-25 12:11:49.145673] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:30:03.225 [2024-07-25 12:11:49.145715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.225 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.225 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.225 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.225 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.225 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:03.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.4 cannot 
be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:03.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.226 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:03.226 [2024-07-25 12:11:49.264707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.484 [2024-07-25 12:11:49.354815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.484 [2024-07-25 12:11:49.376035] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:03.484 [2024-07-25 12:11:49.384056] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:03.484 [2024-07-25 12:11:49.392075] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:03.484 [2024-07-25 12:11:49.495026] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:06.087 [2024-07-25 12:11:51.662446] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:06.087 [2024-07-25 12:11:51.662500] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:06.087 [2024-07-25 12:11:51.662514] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:06.088 [2024-07-25 12:11:51.670468] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:06.088 [2024-07-25 12:11:51.670485] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:06.088 [2024-07-25 12:11:51.670496] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:06.088 [2024-07-25 12:11:51.678488] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:06.088 [2024-07-25 12:11:51.678505] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:06.088 [2024-07-25 12:11:51.678515] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
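Those NOTICE lines come from rpc_bdev_crypto_create: each crypto_ram* vbdev references an accel key (test_dek_aesni_cbc_N) and a Malloc base bdev that does not exist yet, so vbdev creation is deferred until the base arrives. A hedged sketch of one entry of the kind of bdev.json driving this is below, written as a heredoc; the parameter spellings (key_name, cipher, num_blocks) are assumptions about this SPDK revision's RPC schema and the key value is a placeholder, so treat it as an illustration rather than the suite's actual generated config:

    # Illustrative only: parameter names and the key value are assumptions, not the real test config.
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "accel_crypto_key_create",
              "params": { "name": "test_dek_aesni_cbc_1",
                          "cipher": "AES_CBC",
                          "key": "00112233445566778899aabbccddeeff" } }
          ]
        },
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 } },
            { "method": "bdev_crypto_create",
              "params": { "name": "crypto_ram",
                          "base_bdev_name": "Malloc0",
                          "key_name": "test_dek_aesni_cbc_1" } }
          ]
        }
      ]
    }
    EOF
    # The real config repeats the malloc/crypto pair for crypto_ram2..4 with keys ..._2.._4.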
00:30:06.088 [2024-07-25 12:11:51.686508] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:06.088 [2024-07-25 12:11:51.686524] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:06.088 [2024-07-25 12:11:51.686535] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:06.088 12:11:51 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.088 1+0 records in 00:30:06.088 1+0 records out 00:30:06.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030195 s, 13.6 MB/s 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:06.088 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.346 1+0 records in 00:30:06.346 1+0 records out 00:30:06.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238569 s, 17.2 MB/s 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:06.346 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.605 1+0 records in 00:30:06.605 1+0 records out 00:30:06.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305747 s, 13.4 MB/s 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:06.605 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.865 1+0 records in 00:30:06.865 1+0 records out 00:30:06.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322007 s, 12.7 MB/s 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:06.865 12:11:52 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd0", 00:30:07.124 "bdev_name": "crypto_ram" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd1", 00:30:07.124 "bdev_name": "crypto_ram2" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd2", 00:30:07.124 "bdev_name": "crypto_ram3" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd3", 00:30:07.124 "bdev_name": "crypto_ram4" 00:30:07.124 } 00:30:07.124 ]' 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd0", 00:30:07.124 "bdev_name": "crypto_ram" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd1", 00:30:07.124 "bdev_name": "crypto_ram2" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd2", 00:30:07.124 "bdev_name": "crypto_ram3" 00:30:07.124 }, 00:30:07.124 { 00:30:07.124 "nbd_device": "/dev/nbd3", 00:30:07.124 "bdev_name": "crypto_ram4" 00:30:07.124 } 00:30:07.124 ]' 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:30:07.124 
12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.124 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.382 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.640 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.898 12:11:53 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.157 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # 
nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:08.724 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:30:08.982 /dev/nbd0 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:08.983 1+0 records in 00:30:08.983 1+0 records out 00:30:08.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207822 s, 19.7 MB/s 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:08.983 12:11:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:30:09.241 /dev/nbd1 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.241 1+0 records in 00:30:09.241 1+0 records out 00:30:09.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302255 s, 13.6 MB/s 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:09.241 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:30:09.500 /dev/nbd10 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 
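The grep/dd/stat sequences interleaved above all come from the same waitfornbd helper: after each nbd_start_disk, the test polls /proc/partitions until the kernel publishes the node, then issues a single 4 KiB O_DIRECT read and checks that a non-empty file came back. Reconstructed from the xtrace (the canonical version lives in common/autotest_common.sh and may differ in detail), the helper is roughly:

    # Approximate reconstruction from the trace above, not the verbatim helper.
    waitfornbd_sketch() {
        local nbd_name=$1 i size tmp=./nbdtest

        # Up to 20 attempts for the kernel to list the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # A single 4 KiB direct read proves the SPDK NBD backend answers I/O.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }

    waitfornbd_sketch nbd1    # e.g. right after: rpc.py ... nbd_start_disk crypto_ram2 /dev/nbd1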
00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.500 1+0 records in 00:30:09.500 1+0 records out 00:30:09.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346547 s, 11.8 MB/s 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:09.500 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:30:09.757 /dev/nbd11 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.757 1+0 records in 00:30:09.757 1+0 records out 00:30:09.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301488 s, 13.6 MB/s 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:09.757 
12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:09.757 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.758 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:10.015 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd0", 00:30:10.015 "bdev_name": "crypto_ram" 00:30:10.015 }, 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd1", 00:30:10.015 "bdev_name": "crypto_ram2" 00:30:10.015 }, 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd10", 00:30:10.015 "bdev_name": "crypto_ram3" 00:30:10.015 }, 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd11", 00:30:10.015 "bdev_name": "crypto_ram4" 00:30:10.015 } 00:30:10.015 ]' 00:30:10.015 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd0", 00:30:10.015 "bdev_name": "crypto_ram" 00:30:10.015 }, 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd1", 00:30:10.015 "bdev_name": "crypto_ram2" 00:30:10.015 }, 00:30:10.015 { 00:30:10.015 "nbd_device": "/dev/nbd10", 00:30:10.015 "bdev_name": "crypto_ram3" 00:30:10.015 }, 00:30:10.015 { 00:30:10.016 "nbd_device": "/dev/nbd11", 00:30:10.016 "bdev_name": "crypto_ram4" 00:30:10.016 } 00:30:10.016 ]' 00:30:10.016 12:11:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:10.016 /dev/nbd1 00:30:10.016 /dev/nbd10 00:30:10.016 /dev/nbd11' 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:10.016 /dev/nbd1 00:30:10.016 /dev/nbd10 00:30:10.016 /dev/nbd11' 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:10.016 256+0 records in 00:30:10.016 256+0 records out 00:30:10.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011386 s, 92.1 MB/s 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:10.016 256+0 records in 00:30:10.016 256+0 records out 00:30:10.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0563826 s, 18.6 MB/s 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.016 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:10.274 256+0 records in 00:30:10.274 256+0 records out 00:30:10.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0480554 s, 21.8 MB/s 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:10.274 256+0 records in 00:30:10.274 256+0 records out 00:30:10.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0377464 s, 27.8 MB/s 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:30:10.274 256+0 records in 00:30:10.274 256+0 records out 00:30:10.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379157 s, 27.7 MB/s 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.274 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.533 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.791 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:11.050 12:11:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.050 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:11.309 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:11.309 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:11.309 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:11.309 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:11.309 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.310 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
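The dd/cmp pass above (nbd_dd_data_verify) is the actual data-integrity check: 1 MiB of random data is written through each crypto vbdev with O_DIRECT, read back over the same /dev/nbd node, and compared byte-for-byte against the source file; afterwards every disk is detached and nbd_get_disks must report zero devices. Condensed into a standalone sketch with abbreviated paths:

    # Condensed sketch of the traced write/verify/teardown pass; paths shortened.
    SOCK=/var/tmp/spdk-nbd.sock
    RPC=./scripts/rpc.py
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)

    dd if=/dev/urandom of=./nbdrandtest bs=4096 count=256              # 1 MiB of test data

    for nbd in "${nbds[@]}"; do
        dd if=./nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write through the vbdev
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M ./nbdrandtest "$nbd"                              # byte-wise read-back compare
    done
    rm ./nbdrandtest

    for nbd in "${nbds[@]}"; do
        "$RPC" -s "$SOCK" nbd_stop_disk "$nbd"
    done
    count=$("$RPC" -s "$SOCK" nbd_get_disks | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]            # teardown succeeded only if nothing is still exported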
00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:11.569 12:11:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:12.136 malloc_lvol_verify 00:30:12.136 12:11:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:12.395 b3276fc9-b6ad-4c24-8109-96dcdbfc19ec 00:30:12.395 12:11:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:12.654 83c942af-4890-4181-98f2-84074b682244 00:30:12.913 12:11:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:12.913 /dev/nbd0 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:12.913 mke2fs 1.46.5 (30-Dec-2021) 00:30:12.913 Discarding device blocks: 0/4096 done 00:30:12.913 Creating filesystem with 4096 1k blocks and 1024 inodes 00:30:12.913 00:30:12.913 Allocating group tables: 0/1 done 00:30:12.913 Writing inode tables: 0/1 done 00:30:12.913 Creating journal (1024 blocks): done 00:30:12.913 Writing superblocks and filesystem accounting information: 0/1 done 00:30:12.913 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:12.913 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:13.172 12:11:59 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 117094 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 117094 ']' 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 117094 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.172 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117094 00:30:13.431 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.431 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.431 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117094' 00:30:13.431 killing process with pid 117094 00:30:13.431 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 117094 00:30:13.431 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 117094 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:30:13.690 00:30:13.690 real 0m10.548s 00:30:13.690 user 0m14.281s 00:30:13.690 sys 0m3.905s 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:13.690 ************************************ 00:30:13.690 END TEST bdev_nbd 00:30:13.690 ************************************ 00:30:13.690 12:11:59 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:30:13.690 12:11:59 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:30:13.690 12:11:59 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.690 12:11:59 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:13.690 ************************************ 00:30:13.690 START TEST bdev_fio 00:30:13.690 ************************************ 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@330 -- # local env_context 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:30:13.690 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:30:13.690 12:11:59 
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.690 12:11:59 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:13.950 ************************************ 00:30:13.950 START TEST bdev_fio_rw_verify 00:30:13.950 ************************************ 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:30:13.950 12:11:59 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:13.950 12:11:59 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:14.208 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:14.208 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:14.208 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:14.208 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:14.208 fio-3.35 00:30:14.208 Starting 4 threads 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.3 cannot be used 
00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:14.468 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:14.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.468 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:29.375 00:30:29.375 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=119920: Thu Jul 25 12:12:13 2024 00:30:29.375 read: IOPS=25.7k, BW=100MiB/s (105MB/s)(1003MiB/10001msec) 00:30:29.375 slat (usec): min=15, max=471, avg=50.98, stdev=32.06 00:30:29.375 clat (usec): min=12, max=1594, avg=277.59, stdev=185.48 00:30:29.376 lat (usec): min=39, max=1676, avg=328.57, stdev=204.89 00:30:29.376 clat percentiles (usec): 00:30:29.376 | 50.000th=[ 233], 99.000th=[ 963], 99.900th=[ 1172], 99.990th=[ 1303], 00:30:29.376 | 99.999th=[ 1450] 00:30:29.376 write: IOPS=28.2k, BW=110MiB/s (115MB/s)(1076MiB/9776msec); 0 zone resets 00:30:29.376 slat (usec): min=22, max=362, avg=62.34, stdev=31.45 00:30:29.376 clat (usec): min=25, max=2565, avg=338.16, stdev=217.99 00:30:29.376 lat (usec): min=60, max=2889, avg=400.50, stdev=236.57 00:30:29.376 clat percentiles (usec): 00:30:29.376 | 50.000th=[ 297], 99.000th=[ 1106], 99.900th=[ 1401], 99.990th=[ 1696], 00:30:29.376 | 99.999th=[ 2507] 00:30:29.376 bw ( KiB/s): min=87968, max=140080, per=98.27%, avg=110789.47, stdev=3991.47, samples=76 00:30:29.376 iops : min=21992, max=35020, avg=27697.37, stdev=997.87, samples=76 00:30:29.376 lat (usec) : 20=0.01%, 50=0.01%, 100=8.45%, 250=38.57%, 500=38.72% 00:30:29.376 lat (usec) : 750=9.71%, 1000=3.32% 00:30:29.376 lat (msec) : 2=1.21%, 4=0.01% 00:30:29.376 cpu : usr=99.62%, sys=0.01%, ctx=61, majf=0, minf=243 00:30:29.376 IO depths : 1=10.3%, 2=25.6%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.376 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.376 issued rwts: total=256691,275529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.376 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:29.376 00:30:29.376 Run status group 0 (all jobs): 00:30:29.376 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=1003MiB (1051MB), run=10001-10001msec 00:30:29.376 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1076MiB (1129MB), run=9776-9776msec 00:30:29.376 00:30:29.376 real 0m13.479s 00:30:29.376 user 0m52.877s 00:30:29.376 sys 0m0.479s 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:30:29.376 ************************************ 00:30:29.376 END TEST bdev_fio_rw_verify 00:30:29.376 ************************************ 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9f77bd07-dfbf-5fed-baf0-3877e8e5d825"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9f77bd07-dfbf-5fed-baf0-3877e8e5d825",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "716e1e72-9312-5774-8e98-df9e3379bdbf"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "716e1e72-9312-5774-8e98-df9e3379bdbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "87a5dbb9-238a-5173-ac31-cdb6ff866b25"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "87a5dbb9-238a-5173-ac31-cdb6ff866b25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b9a62391-7c1f-587d-a960-e4103d75ac3f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b9a62391-7c1f-587d-a960-e4103d75ac3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:30:29.376 crypto_ram2 00:30:29.376 crypto_ram3 00:30:29.376 crypto_ram4 ]] 00:30:29.376 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9f77bd07-dfbf-5fed-baf0-3877e8e5d825"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9f77bd07-dfbf-5fed-baf0-3877e8e5d825",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "716e1e72-9312-5774-8e98-df9e3379bdbf"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "716e1e72-9312-5774-8e98-df9e3379bdbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "87a5dbb9-238a-5173-ac31-cdb6ff866b25"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "87a5dbb9-238a-5173-ac31-cdb6ff866b25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b9a62391-7c1f-587d-a960-e4103d75ac3f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b9a62391-7c1f-587d-a960-e4103d75ac3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:29.377 ************************************ 00:30:29.377 START TEST bdev_fio_trim 00:30:29.377 ************************************ 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:29.377 12:12:13 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:29.377 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:29.377 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:29.377 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:29.377 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:29.377 fio-3.35 00:30:29.377 Starting 4 threads 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 
EAL: Requested device 0000:3f:01.5 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:29.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:29.377 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:41.585 00:30:41.585 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=122605: Thu Jul 25 12:12:26 2024 00:30:41.585 write: IOPS=42.5k, BW=166MiB/s (174MB/s)(1662MiB/10001msec); 0 zone resets 00:30:41.585 slat (usec): min=16, max=1314, avg=53.10, stdev=31.27 00:30:41.585 clat (usec): min=29, max=1813, avg=240.28, stdev=155.16 00:30:41.586 lat (usec): min=54, max=1899, avg=293.38, stdev=175.38 00:30:41.586 clat percentiles (usec): 00:30:41.586 | 50.000th=[ 198], 99.000th=[ 791], 99.900th=[ 947], 99.990th=[ 1057], 00:30:41.586 | 99.999th=[ 1483] 00:30:41.586 bw ( KiB/s): min=150416, max=217232, per=100.00%, avg=171236.21, stdev=7169.45, samples=76 00:30:41.586 iops : min=37604, max=54308, avg=42809.05, stdev=1792.36, samples=76 00:30:41.586 trim: IOPS=42.5k, BW=166MiB/s (174MB/s)(1662MiB/10001msec); 0 zone resets 00:30:41.586 slat (usec): min=6, max=374, avg=14.67, stdev= 5.98 00:30:41.586 clat (usec): min=54, max=1676, avg=226.37, stdev=102.94 00:30:41.586 lat (usec): min=61, max=1690, avg=241.04, stdev=104.98 00:30:41.586 clat percentiles (usec): 00:30:41.586 | 50.000th=[ 210], 99.000th=[ 553], 99.900th=[ 660], 99.990th=[ 734], 00:30:41.586 | 99.999th=[ 1020] 00:30:41.586 bw ( KiB/s): min=150432, max=217256, per=100.00%, avg=171237.89, stdev=7169.85, samples=76 00:30:41.586 iops : min=37608, max=54314, avg=42809.47, stdev=1792.46, samples=76 00:30:41.586 lat (usec) : 50=0.01%, 100=9.82%, 250=56.26%, 500=29.23%, 750=3.95% 00:30:41.586 lat (usec) : 1000=0.72% 00:30:41.586 lat (msec) : 2=0.02% 00:30:41.586 cpu : usr=99.64%, sys=0.00%, ctx=85, majf=0, minf=103 00:30:41.586 IO depths : 1=8.0%, 2=26.3%, 4=52.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:41.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.586 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.586 issued rwts: total=0,425414,425415,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.586 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:41.586 00:30:41.586 Run status group 0 (all jobs): 00:30:41.586 WRITE: bw=166MiB/s 
(174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=1662MiB (1742MB), run=10001-10001msec 00:30:41.586 TRIM: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=1662MiB (1742MB), run=10001-10001msec 00:30:41.586 00:30:41.586 real 0m13.468s 00:30:41.586 user 0m54.172s 00:30:41.586 sys 0m0.465s 00:30:41.586 12:12:26 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.586 12:12:26 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:30:41.586 ************************************ 00:30:41.586 END TEST bdev_fio_trim 00:30:41.586 ************************************ 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:30:41.586 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:41.586 00:30:41.586 real 0m27.299s 00:30:41.586 user 1m47.224s 00:30:41.586 sys 0m1.142s 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:41.586 ************************************ 00:30:41.586 END TEST bdev_fio 00:30:41.586 ************************************ 00:30:41.586 12:12:27 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:41.586 12:12:27 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:41.586 12:12:27 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:30:41.586 12:12:27 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.586 12:12:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:41.586 ************************************ 00:30:41.586 START TEST bdev_verify 00:30:41.586 ************************************ 00:30:41.586 12:12:27 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:41.586 [2024-07-25 12:12:27.171976] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:30:41.586 [2024-07-25 12:12:27.172030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124459 ] 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:41.586 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:41.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.586 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:41.586 [2024-07-25 12:12:27.306582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:41.586 [2024-07-25 12:12:27.395825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.586 [2024-07-25 12:12:27.395831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.586 [2024-07-25 12:12:27.417136] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:41.586 [2024-07-25 12:12:27.425182] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:41.586 [2024-07-25 12:12:27.433190] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:41.586 [2024-07-25 12:12:27.530462] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:44.121 [2024-07-25 12:12:29.700387] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:44.121 [2024-07-25 12:12:29.700457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:44.121 [2024-07-25 12:12:29.700471] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:44.121 [2024-07-25 12:12:29.708400] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:44.121 [2024-07-25 12:12:29.708417] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:44.121 [2024-07-25 12:12:29.708428] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:44.121 [2024-07-25 12:12:29.716422] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:44.121 [2024-07-25 12:12:29.716438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:44.121 [2024-07-25 12:12:29.716449] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:44.121 [2024-07-25 12:12:29.724445] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:44.121 [2024-07-25 12:12:29.724461] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:44.121 [2024-07-25 12:12:29.724471] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:44.121 Running I/O for 5 seconds... 00:30:49.391 00:30:49.391 Latency(us) 00:30:49.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.391 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.391 Verification LBA range: start 0x0 length 0x1000 00:30:49.391 crypto_ram : 5.06 531.05 2.07 0.00 0.00 240495.07 11848.91 175321.91 00:30:49.391 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:49.391 Verification LBA range: start 0x1000 length 0x1000 00:30:49.391 crypto_ram : 5.06 531.12 2.07 0.00 0.00 240312.44 13631.49 175321.91 00:30:49.391 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.391 Verification LBA range: start 0x0 length 0x1000 00:30:49.391 crypto_ram2 : 5.07 530.68 2.07 0.00 0.00 239943.65 11534.34 163577.86 00:30:49.391 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:49.392 Verification LBA range: start 0x1000 length 0x1000 00:30:49.392 crypto_ram2 : 5.06 530.95 2.07 0.00 0.00 239758.04 11534.34 163577.86 00:30:49.392 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.392 Verification LBA range: start 0x0 length 0x1000 00:30:49.392 crypto_ram3 : 5.05 4156.62 16.24 0.00 0.00 30540.41 4613.73 28730.98 00:30:49.392 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:49.392 Verification LBA range: start 0x1000 length 0x1000 00:30:49.392 crypto_ram3 : 5.05 4180.32 16.33 0.00 0.00 30368.17 3486.52 28730.98 00:30:49.392 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.392 Verification LBA range: start 0x0 length 0x1000 00:30:49.392 crypto_ram4 : 5.05 4154.24 16.23 0.00 0.00 30452.90 5505.02 25060.97 00:30:49.392 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:49.392 Verification LBA range: start 0x1000 length 0x1000 00:30:49.392 crypto_ram4 : 5.05 4178.85 16.32 0.00 0.00 30272.44 3774.87 24956.11 00:30:49.392 =================================================================================================================== 00:30:49.392 Total : 18793.82 73.41 0.00 0.00 54151.75 3486.52 175321.91 00:30:49.392 00:30:49.392 real 0m8.121s 00:30:49.392 user 0m15.446s 00:30:49.392 sys 0m0.339s 00:30:49.392 12:12:35 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.392 12:12:35 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:49.392 ************************************ 00:30:49.392 END TEST bdev_verify 00:30:49.392 ************************************ 00:30:49.392 12:12:35 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:49.392 12:12:35 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:30:49.392 12:12:35 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.392 12:12:35 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:49.392 ************************************ 
00:30:49.392 START TEST bdev_verify_big_io 00:30:49.392 ************************************ 00:30:49.392 12:12:35 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:49.392 [2024-07-25 12:12:35.373203] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:30:49.392 [2024-07-25 12:12:35.373259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125790 ] 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:49.392 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT 
devices
00:30:49.392 EAL: Requested device 0000:3f:01.3 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:01.4 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:01.5 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:01.6 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:01.7 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.0 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.1 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.2 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.3 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.4 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.5 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.6 cannot be used
00:30:49.392 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:49.392 EAL: Requested device 0000:3f:02.7 cannot be used
00:30:49.392 [2024-07-25 12:12:35.503682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:49.651 [2024-07-25 12:12:35.588013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:49.651 [2024-07-25 12:12:35.588018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:49.651 [2024-07-25 12:12:35.609413] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:30:49.651 [2024-07-25 12:12:35.617439] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:30:49.651 [2024-07-25 12:12:35.625464] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:30:49.651 [2024-07-25 12:12:35.725525] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:30:52.185 [2024-07-25 12:12:37.900267] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:30:52.185 [2024-07-25 12:12:37.900335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:30:52.185 [2024-07-25 12:12:37.900349] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:30:52.185 [2024-07-25 12:12:37.908284] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:30:52.185 [2024-07-25 12:12:37.908304] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:30:52.185 [2024-07-25 12:12:37.908315] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:30:52.185 [2024-07-25 12:12:37.916307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create:
*NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:52.186 [2024-07-25 12:12:37.916324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:52.186 [2024-07-25 12:12:37.916334] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:52.186 [2024-07-25 12:12:37.924329] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:52.186 [2024-07-25 12:12:37.924346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:52.186 [2024-07-25 12:12:37.924356] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:52.186 Running I/O for 5 seconds... 00:30:54.723 [2024-07-25 12:12:40.429827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.431398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.431763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.432117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.434675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.436029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.437023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.438284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.440119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.440970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.441344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.441699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.444243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.444675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.723 [2024-07-25 12:12:40.446066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.447592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.449465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.449829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.450190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.724 [2024-07-25 12:12:40.451277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.453446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.454713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.455981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.457486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.458424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.458792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.459311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.460562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.462356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.463623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.465134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.466648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.467374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.467742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.469378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.470925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.473508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.475075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.476591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.477891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.478676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.479831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.481096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.724 [2024-07-25 12:12:40.482566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.484846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.486358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.487870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.488241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.489199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.490471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.491984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.493502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.496069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.497599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.498385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.498742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.500856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.502413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.504043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.505520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.508106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.509407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.509452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.509805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.511558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.512842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.512888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.724 [2024-07-25 12:12:40.514421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.515581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.517098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.517147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.518649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.519014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.519387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.519435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.519788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.520858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.522396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.522447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.523118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.523486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.525000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.525045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.526548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.527804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.529317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.529370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.530971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.531344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.532760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.532805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.724 [2024-07-25 12:12:40.533914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.534973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.535834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.535891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.536253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.536751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.538107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.538158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.539537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.542092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.542152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.543364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.543407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.544241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.544292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.724 [2024-07-25 12:12:40.545542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.545588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.548127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.548185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.548585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.548627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.549386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.549441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.550816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.725 [2024-07-25 12:12:40.550861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.553338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.553393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.553747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.553788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.555293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.555346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.556721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.556767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.558601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.558655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.559009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.559049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.561006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.561065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.562394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.562437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.563817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.563872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.564234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.564274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.565735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.565789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.566317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.725 [2024-07-25 12:12:40.566372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.567800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.567855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.568353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.568396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.570248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.570303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.570947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.570989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.572582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.572635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.573239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.573286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.574785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.574844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.576037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.576083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.577755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.577817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.578185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.578242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.579050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.579102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.579468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.725 [2024-07-25 12:12:40.579516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.581212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.581268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.581630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.581673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.582463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.582517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.582872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.582922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.584539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.584592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.584954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.584999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.585778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.585845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.586205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.586258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.587840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.587895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.588263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.588315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.589065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.589116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.589486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.725 [2024-07-25 12:12:40.589527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.591117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.591179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.591534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.591579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.592307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.592362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.592728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.592768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.594345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.594399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.725 [2024-07-25 12:12:40.594757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.594802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.595525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.595577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.595929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.595968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.597542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.597597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.597953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.597998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.598725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.598779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.599133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.726 [2024-07-25 12:12:40.599182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.600799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.600854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.601222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.601270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.601987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.602039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.602401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.602442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.604032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.604085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.604451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.604499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.605236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.605289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.605644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.605684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.607266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.607320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.607676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.607721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.608448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.608500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.608854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.726 [2024-07-25 12:12:40.608894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.610478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.610530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.610887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.610933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.610951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.611234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.611686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.611735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.612094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.612133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.612157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.612493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.613458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.613824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.613866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.614234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.614528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.614669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.615030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.615072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.615450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.615767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.616711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.726 [2024-07-25 12:12:40.616761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.616799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.616838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.617772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.618740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.618787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.618825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.618862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.619724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.620920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.620976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.726 [2024-07-25 12:12:40.621438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.621895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.622754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.622802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.622840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.726 [2024-07-25 12:12:40.622877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.623943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.738826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.738898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.740396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.740438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.743342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.743398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.744737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.744786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.746464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.727 [2024-07-25 12:12:40.746517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.747928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.747970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.749875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.749928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.750286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.750326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.751907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.751959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.753233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.753277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.755497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.755550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.756833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.756876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.757806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.757858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.758220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.758260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.760739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.760792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.761901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.763348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.765007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.727 [2024-07-25 12:12:40.765059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.766563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.766960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.769559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.769614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.771117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.772204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.772257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.773518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.727 [2024-07-25 12:12:40.774497] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.774880] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.775252] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.776833] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.777223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.778645] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.780247] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.781163] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.783670] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.784041] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.785198] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.786453] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.788376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.789411] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.790991] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.727 [2024-07-25 12:12:40.792463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.793893] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.794430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.795703] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.796997] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.798819] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.799954] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.801230] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.802529] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.804053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.805634] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.807266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.808901] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.809998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.811286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.812583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.814105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.816955] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.818452] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.818511] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.820134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.821267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.822536] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.822587] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.727 [2024-07-25 12:12:40.823879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.825009] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.825390] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.825442] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.727 [2024-07-25 12:12:40.826692] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.827061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.828593] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.828646] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.829711] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.830819] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.832352] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.832405] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.832858] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.833294] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.834236] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.834289] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.835559] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.836631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.838192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.838251] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.839879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.728 [2024-07-25 12:12:40.840253] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.841746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.841799] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.990 [2024-07-25 12:12:40.842164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.843425] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.844727] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.844779] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.846293] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.846749] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.848017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.848068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.849357] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.850460] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.850831] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.850883] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.852244] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.852611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.854167] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.854218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.855395] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.856483] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.858001] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.858053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.858658] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.859150] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.859954] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.860004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.990 [2024-07-25 12:12:40.861253] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.862382] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.863654] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.863705] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.864864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.865302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.865672] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.865720] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.866150] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.867274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.868783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.868843] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.870303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.870787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.871166] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.871216] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.872607] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.873730] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.990 [2024-07-25 12:12:40.874919] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.874973] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.875444] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.875883] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.876432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.876487] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.991 [2024-07-25 12:12:40.877655] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.878713] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.880060] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.880430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.880483] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.881037] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.882422] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.883639] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.883690] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.885559] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.885929] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.885981] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.886848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.888754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.889708] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.889760] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.890689] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.892875] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.892936] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.893870] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.894680] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.895953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.896013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.896565] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.991 [2024-07-25 12:12:40.896929] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.898050] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.898481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.899947] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.900004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.900378] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.900750] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.901114] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.901169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.902807] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.903189] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.903250] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.903621] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.904479] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.904858] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.904918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.905296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.907519] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.907578] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.907941] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.908323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.909206] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.909267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.909631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.991 [2024-07-25 12:12:40.909999] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.911295] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.911669] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.912031] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.912077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.912528] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.912904] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.913277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.913353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.914960] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.915338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.915389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.915754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.916490] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.916860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.916911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.917281] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.918741] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.918803] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.919174] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.919544] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.920368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.920429] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.920793] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:30:54.991 [2024-07-25 12:12:40.921163] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.922502] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.922873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.923243] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.923292] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.923720] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.991 [2024-07-25 12:12:40.924095] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.924469] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.924520] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.926204] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.926576] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.926628] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.926993] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.927741] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.928111] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.928169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.928538] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.930102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:30:54.992 [2024-07-25 12:12:40.930561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.930921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.930963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.931757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.931785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.932150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.992 [2024-07-25 12:12:40.932509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.932552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.932879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.934113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.934486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.934533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.934888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.935677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.936041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.936085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.936452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.936794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.938035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.938410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.938465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.938826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.939333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.939696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.939738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.940094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.940442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.941522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.942439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.942484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.992 [2024-07-25 12:12:40.943631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.944010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.945357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.945413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.946735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.947088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.948624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.950039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.950096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.951547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.952049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.953212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.953259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.954267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.954513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.955329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.955698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.955740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.955780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.956151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.957713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.957773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.957813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.958121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.992 [2024-07-25 12:12:40.958920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.959295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.959338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.959693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.960056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.961218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.961265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.961741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.961985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.962845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.962894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.962932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.962970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.963388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.963435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.963475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.963517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.963759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.964604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.964652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.964690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.992 [2024-07-25 12:12:40.964730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.965091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.965166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.993 [2024-07-25 12:12:40.965217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.965260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.965500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.966803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.966853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.966890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.966928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.967357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.967402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.967441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.967478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.967758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.968581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.968629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.968679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.968719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.969083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.969144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.969183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.969221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.969460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.970309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.971804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.971860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.993 [2024-07-25 12:12:40.972222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.972645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.974164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.974218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.974579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.974823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.975756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.976753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.976797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.978023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.978491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.979809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.979854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.981188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.981507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.982310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.983334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.983382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.983918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.984287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.984657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.984707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.986280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.986524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.993 [2024-07-25 12:12:40.987505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.988433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.988480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.989791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.990266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.991271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.991316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.992310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.992596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.993381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.994492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.994536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.996159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.996524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.997991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.998044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.999518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:40.999796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.000588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.001073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.001116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.002623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.003045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.004332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.993 [2024-07-25 12:12:41.004378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.005904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.006252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.007039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.008553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.008598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.009809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.010182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.010567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.010612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.011875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.012233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.013019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.014583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.014638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.016035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.993 [2024-07-25 12:12:41.016415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.017700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.017746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.019017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.019270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.020104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.020866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.020912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.994 [2024-07-25 12:12:41.021744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.022115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.023370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.023414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.024685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.024931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.025715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.027125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.027180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.028685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.029051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.030189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.030236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.030681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.030927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.031823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.031875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.033317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.033370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.033728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.033785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.035218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.035262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:54.994 [2024-07-25 12:12:41.035505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:54.994 [2024-07-25 12:12:41.036364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[ ... identical "*ERROR*: Failed to get src_mbufs!" entries from accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources repeat several hundred times between 12:12:41.036364 and 12:12:41.464684 (console timestamps 00:30:54.994 through 00:30:55.524); identical repetitions condensed ... ]
00:30:55.524 [2024-07-25 12:12:41.464684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:30:55.524 [2024-07-25 12:12:41.465738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.465781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.467448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.467501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.469006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.469049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.469425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.473721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.473775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.474518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.474570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.475393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.475445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.477051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.477101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.477362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.481556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.481618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.483130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.483181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.484807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.484860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.485229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.485273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.524 [2024-07-25 12:12:41.485518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.489610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.489666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.490502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.490546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.492217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.492269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.493779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.493822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.494162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.499386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.499440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.500724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.500767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.502344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.502395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.503910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.503964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.504214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.507663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.507718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.508599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.508641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.510203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.524 [2024-07-25 12:12:41.510257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.511530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.511575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.511824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.516301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.516362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.517398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.517442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.519201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.519254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.519613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.519659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.524 [2024-07-25 12:12:41.519909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.525051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.525106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.525156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.526402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.527957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.528010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.528055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.529482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.529844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.533726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.533786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.525 [2024-07-25 12:12:41.535288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.535330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.535696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.535742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.537337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.537381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.537630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.540481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.541488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.541533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.541571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.542112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.543590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.543640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.543678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.543927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.548078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.548134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.548178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.549678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.551418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.551476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.551516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.552070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.525 [2024-07-25 12:12:41.552324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.555014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.555074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.556444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.556486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.556850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.556899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.558283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.558327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.558586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.562298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.563150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.563197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.563236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.563597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.564862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.564909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.564950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.565209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.570506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.570565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.570602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.571756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.572526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.525 [2024-07-25 12:12:41.572592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.572635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.573961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.574301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.577876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.577933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.578970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.579013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.579453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.579499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.580783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.580827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.581078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.583295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.584205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.584251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.584288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.584724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.586019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.586065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.586102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.586354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.590911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.590967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.525 [2024-07-25 12:12:41.591006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.592412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.525 [2024-07-25 12:12:41.594422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.594482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.594522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.594998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.595252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.599283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.599332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.600587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.600630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.601042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.601086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.602592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.602636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.602956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.606824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.526 [2024-07-25 12:12:41.607070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.612219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.612273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.612311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.613279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.613967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.614021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.614060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.615233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.615542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.619126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.620156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.620201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.621453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.621863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.623356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.623401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.624151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.624403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.626385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.627678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.627724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.629219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.629608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.526 [2024-07-25 12:12:41.631186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.631242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.632827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.633073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.526 [2024-07-25 12:12:41.636641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.637362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.637409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.638698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.639103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.640387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.640432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.641934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.642262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.645851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.646946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.646991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.647462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.647833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.648210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.648255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.649740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.649991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.829 [2024-07-25 12:12:41.654091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.655716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.830 [2024-07-25 12:12:41.655765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.657099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.658048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.659467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.659516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.659876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.660135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.663816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.665093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.666465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.667972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.668353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.669692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.670098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.671688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.672119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.676337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.677765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.679292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.680915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.682173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.682967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.684174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.684717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.830 [2024-07-25 12:12:41.684969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.689906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.691227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.692507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.694014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.695631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.696492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.697388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.698293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.698580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.702547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.703817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.705224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.705588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.706294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.707553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.708718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.709424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.709696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.715010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.715388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.715438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.716830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.717534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.830 [2024-07-25 12:12:41.718881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.718934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.720282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.720533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.722779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.723936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.723984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.724792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.725240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.726396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.726444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.727191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.727455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.729277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.730854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.730909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.731462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.731841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.733198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.733242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.734740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.735160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.739269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.739859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.830 [2024-07-25 12:12:41.739905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.741233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.741614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.742455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.742503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.743243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.743498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.746010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.747389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.747434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.748530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.748962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.750288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.750333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.751187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.751458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.830 [2024-07-25 12:12:41.752979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.753534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.753581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.754606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.755001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.755766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.755810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:55.831 [2024-07-25 12:12:41.757019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:55.831 [2024-07-25 12:12:41.757378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:30:55.831 [... same *ERROR* line from accel_dpdk_cryptodev.c:468 repeated continuously (several hundred duplicate entries between 12:12:41.757 and 12:12:42.096 omitted) ...]
00:30:56.098 [2024-07-25 12:12:42.096384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:30:56.098 [2024-07-25 12:12:42.096703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.100527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.101983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.102353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.102710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.104352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.105651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.106966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.108021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.108325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.109436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.109806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.110449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.111929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.112615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.114125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.115496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.115855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.116224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.117903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.118742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.118791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.119150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.119975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.098 [2024-07-25 12:12:42.120356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.120405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.120760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.121134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.122086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.122469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.122516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.122871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.123287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.123653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.123707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.124061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.124426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.125381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.125757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.125804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.126164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.126571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.126937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.126994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.127357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.127708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.128616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.128993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.098 [2024-07-25 12:12:42.129040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.129405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.129823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.130202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.130258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.130619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.130950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.131870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.132250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.132298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.132652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.133066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.133448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.133512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.133873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.134183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.098 [2024-07-25 12:12:42.135099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.135479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.135526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.135880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.136272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.136648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.136696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.137057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.099 [2024-07-25 12:12:42.137361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.138337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.138710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.138756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.139110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.139515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.139885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.139939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.140318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.140633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.141588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.141957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.142004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.142366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.142783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.143164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.143220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.143577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.143883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.144888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.145272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.145321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.145674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.146109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.099 [2024-07-25 12:12:42.146483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.146540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.146899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.147218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.148228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.148605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.148652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.149005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.149447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.149812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.149870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.150237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.150561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.152391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.152759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.154034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.154081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.154533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.156007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.156377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.156422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.156728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.159012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.159985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.099 [2024-07-25 12:12:42.160032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.160968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.161659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.162022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.162067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.163703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.163988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.165729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.165783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.166340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.166698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.168448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.168508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.170086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.170901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.171222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.174099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.175038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.176217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.176262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.176624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.177573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.178342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.099 [2024-07-25 12:12:42.178387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.100 [2024-07-25 12:12:42.178759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.180882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.181408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.181455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.182945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.183700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.184064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.184108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.184824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.185101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.187171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.187225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.188398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.188754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.190729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.190789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.191988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.193506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.193838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.194692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.196323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.196682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.196726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.197198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.100 [2024-07-25 12:12:42.198240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.198938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.198983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.199266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.200571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.201772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.201818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.203095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.203982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.205617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.205664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.207194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.207440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.208647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.208701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.209058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.209436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.100 [2024-07-25 12:12:42.211507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.211568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.213069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.213881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.214188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.215061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.215440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.361 [2024-07-25 12:12:42.217049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.217100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.217482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.219073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.220625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.220668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.220915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.223247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.224300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.224355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.224708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.226590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.228118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.228173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.229675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.229923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.232045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.233508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.234564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.234932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.237098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.238729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.240255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.241882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.361 [2024-07-25 12:12:42.242185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.242986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.244503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.245623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.245679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.246233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.246695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.247979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.248022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.248272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.249948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.250000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.251270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.251312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.253200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.253252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.253771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.253817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.254127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.256192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.256245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.257751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.257794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.259729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.361 [2024-07-25 12:12:42.259788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.261392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.261434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.261681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.262914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.262968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.264272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.361 [2024-07-25 12:12:42.264316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.266003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.266056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.267542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.267584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.267905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.270248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.270300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.271533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.271576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.272453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.272507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.273853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.273896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.274146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.274938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.274986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.362 [2024-07-25 12:12:42.276150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.276193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.276598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.276643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.277926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.277969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.278221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.279960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.280013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.280050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.280088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.281727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.281779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.281817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.281855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.282102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.283664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.362 [2024-07-25 12:12:42.283911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.284936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.284984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.285830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.286629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.286684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.286722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.286761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.287126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.287177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.287221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.287260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.287506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.288351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.288406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.288766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.288808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.362 [2024-07-25 12:12:42.289314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.362 [2024-07-25 12:12:42.289360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.362 [... the same "*ERROR*: Failed to get src_mbufs!" message from accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources repeats continuously for every allocation attempt logged between 12:12:42.289 and 12:12:42.609; duplicate lines elided ...] 
00:30:56.630 [2024-07-25 12:12:42.609816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.630 [2024-07-25 12:12:42.609871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.611331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.611377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.611622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.613194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.613248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.613605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.613649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.615226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.615278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.616568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.616612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.616898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.618921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.618975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.620334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.620376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.621240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.621293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.621650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.621696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.621943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.622783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.622830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.630 [2024-07-25 12:12:42.624104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.624151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.624557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.624602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.625865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.625912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.626201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.627295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.627350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.627388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.627438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.629452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.629512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.629553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.629595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.629840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.630687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.630737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.630783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.630821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.631194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.631238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.631282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.631322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.630 [2024-07-25 12:12:42.631568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.632428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.632476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.632516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.632564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.633071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.633116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.633176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.633216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.633597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.634967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.635005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.635043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.635336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.636175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.636222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.637656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.637707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.638075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.630 [2024-07-25 12:12:42.638145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.638505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.638551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.638981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.641054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.641108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.642389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.642432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.644202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.644256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.645530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.645573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.645819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.646986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.647042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.648355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.648399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.630 [2024-07-25 12:12:42.650083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.650147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.651416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.651460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.651758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.654128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.654189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.631 [2024-07-25 12:12:42.655699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.655741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.656565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.656619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.658232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.658281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.658536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.660246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.660301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.661577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.661620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.663306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.663359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.664552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.664605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.665031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.667286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.667340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.668667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.668710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.669911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.669964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.671229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.671272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.631 [2024-07-25 12:12:42.671558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.672662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.672717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.673086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.673131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.674812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.674868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.676355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.676406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.676653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.678840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.678895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.680175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.680218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.680901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.680960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.681324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.681369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.681617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.683701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.683756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.684567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.684610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.686247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.631 [2024-07-25 12:12:42.686302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.687584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.687627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.687909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.691108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.691168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.692671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.692722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.694604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.694664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.695948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.695991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.696308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.698110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.698174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.698216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.698574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.700397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.700458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.700503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.701980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.702310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.703103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.703159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.631 [2024-07-25 12:12:42.703521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.703583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.704128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.704193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.705816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.705867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.706203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.707041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.708423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.708470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.708525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.709021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.631 [2024-07-25 12:12:42.709393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.709440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.709491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.709738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.711839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.711893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.711939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.713135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.713860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.713912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.713952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.714691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.632 [2024-07-25 12:12:42.714979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.715884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.715933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.716856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.716903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.717287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.717344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.717705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.717749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.718067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.718922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.719347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.719396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.719435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.719847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.720909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.720956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.721011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.721436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.723613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.723675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.723713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.724225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.725947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.632 [2024-07-25 12:12:42.726007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.726046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.726417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.726709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.727567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.727620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.728998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.729042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.729416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.729469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.730362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.730408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.730703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.731798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.733269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.733323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.733362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.733774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.735251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.735305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.632 [2024-07-25 12:12:42.735346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.735706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.737832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.737887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.633 [2024-07-25 12:12:42.737925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.739187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.741214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.741286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.741325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.742601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.633 [2024-07-25 12:12:42.742851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.743759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.743809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.744319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.744365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.744748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.744793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.745797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.745851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.746131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.747683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.895 [2024-07-25 12:12:42.747928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.750108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.750172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.750211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.751004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.752756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.752810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.752848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.754128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.754386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.755427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.755989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.756035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.757534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.757949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.758552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.758601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.760097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.760419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.895 [2024-07-25 12:12:42.761567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.763190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.763233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.763593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.763965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.896 [2024-07-25 12:12:42.764356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.764405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.764761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.765122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.766173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.766547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.766600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.766959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.767451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.767824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.767882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.768496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.768822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.769846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.770224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.770273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.770635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.771086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.771466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.771513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.771870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.772188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.773550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:56.896 [2024-07-25 12:12:42.773932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:56.896 [2024-07-25 12:12:42.773977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.164 [2024-07-25 12:12:43.082183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
(identical *ERROR* line emitted for each allocation attempt between 12:12:42.773977 and 12:12:43.082183; duplicate occurrences omitted)
00:30:57.164 [2024-07-25 12:12:43.082221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.083411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.084109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.084169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.084232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.084585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.084951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.086003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.086078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.086444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.086498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.086949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.087007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.087378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.087426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.087766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.088779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.088838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.088894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.088947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.089442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.089490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.089532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.089578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.164 [2024-07-25 12:12:43.090054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.091405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.091462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.091500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.093134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.093995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.094048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.094086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.094460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.094874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.095953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.096334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.096383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.164 [2024-07-25 12:12:43.096757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.097321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.098771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.098815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.099178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.099573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.100706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.102020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.102065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.102428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.102892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.165 [2024-07-25 12:12:43.103268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.103318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.104068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.104323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.105351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.105723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.105770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.106608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.106980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.107355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.107411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.107774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.108167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.109112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.109489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.109568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.109929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.110392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.111459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.111504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.112023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.112373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.114361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.115072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.165 [2024-07-25 12:12:43.115118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.115483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.116285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.117331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.117378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.117923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.118222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.119286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.120443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.121031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.121397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.121890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.122269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.123456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.124035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.124334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.126307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.127089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.127458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.127822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.129045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.130008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.130374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.132015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.165 [2024-07-25 12:12:43.132418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.134287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.135063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.136046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.136992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.138347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.139303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.140400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.140759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.141181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.142795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.144344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.145678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.146113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.147046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.148270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.149899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.150701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.151021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.153265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.154225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.154272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.154954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.156334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.165 [2024-07-25 12:12:43.156978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.157025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.157390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.157704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.158565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.158934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.158980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.165 [2024-07-25 12:12:43.160231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.160599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.160960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.161007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.161384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.161680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.162592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.164016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.164070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.165455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.165899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.166273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.166318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.167451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.167807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.168658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.170181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.166 [2024-07-25 12:12:43.170233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.171480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.171849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.172242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.172287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.172642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.172932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.173874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.174903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.174953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.175314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.175863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.177088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.177136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.178451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.178699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.179689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.181245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.181290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.182551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.183045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.183433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.183481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.183842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.166 [2024-07-25 12:12:43.184193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.185323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.186364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.186412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.187896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.188378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.189854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.189908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.191384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.191743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.192663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.193053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.193100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.194673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.195151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.195514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.195556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.195912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.196199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.197246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.197769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.197812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.198174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.198548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.166 [2024-07-25 12:12:43.199406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.199453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.200598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.201031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.202276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.203753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.203805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.204509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.204912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.206012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.206059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.206420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.206759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.207617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.209036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.210448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.210491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.210855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.212230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.213499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.213542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.213830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.215534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.216786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.166 [2024-07-25 12:12:43.216832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.166 [2024-07-25 12:12:43.217933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.219699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.221076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.221122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.222591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.222894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.224173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.224227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.225290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.226545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.228197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.228250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.229260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.230798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.231048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.231982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.232373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.232747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.232794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.233168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.234433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.235711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.235754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.167 [2024-07-25 12:12:43.236063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.238461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.240034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.240079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.241544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.242397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.243953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.244005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.245633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.245882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.248091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.248154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.249437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.250716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.251408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.251459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.251868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.253207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.253455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.254342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.255426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.256694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.256738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.257135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.167 [2024-07-25 12:12:43.258423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.259199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.259243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.259551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.261650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.262955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.263002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.264077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.265778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.267112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.267166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.268594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.268938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.271053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.271108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.272392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.273679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.275527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.275588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.276929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.167 [2024-07-25 12:12:43.278364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.278614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.279654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.281278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.428 [2024-07-25 12:12:43.282800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.282851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.283236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.284791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.285868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.285913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.286225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.288184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.288551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.288594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.289308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.290999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.292397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.292444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.293738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.293988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.296200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.296961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.297322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.298148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.299834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.301131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.302462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:30:57.428 [2024-07-25 12:12:43.303899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:30:57.428 [2024-07-25 12:12:43.304202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same *ERROR* line repeats continuously, with timestamps running from 12:12:43.304202 through 12:12:43.410596, while the verify workload is in flight ...]
00:30:57.429 [2024-07-25 12:12:43.410596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
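The error burst above is bdevperf driving the aesni crypto bdevs at queue depth 128 faster than the dpdk_cryptodev accel module can obtain source mbufs, which points at the module's mbuf pool being temporarily exhausted; the condition is transient here, since the verify job below still completes and reports results. The sketch that follows is a generic DPDK illustration of that failure mode, not SPDK's accel_dpdk_cryptodev code: the pool name, pool size, and BURST value are assumptions, chosen only to show rte_pktmbuf_alloc_bulk() failing in a way a caller would treat as "requeue and retry" rather than fatal.

/*
 * Generic DPDK sketch, not SPDK's accel_dpdk_cryptodev code: bulk-allocate
 * source mbufs for a batch of crypto operations and treat pool exhaustion
 * (the condition behind "Failed to get src_mbufs!" above) as a transient,
 * retryable error.  Pool name, pool size, and BURST are illustrative
 * assumptions.
 */
#include <errno.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST 32                      /* assumed per-task batch size */

static struct rte_mempool *src_pool;  /* created once at init time */

/* Returns 0 on success, -ENOMEM when the pool is temporarily empty so the
 * caller can queue the task and try again later instead of failing it. */
static int alloc_src_mbufs(struct rte_mbuf **mbufs)
{
	if (rte_pktmbuf_alloc_bulk(src_pool, mbufs, BURST) != 0)
		return -ENOMEM;
	return 0;
}

int main(int argc, char **argv)
{
	struct rte_mbuf *mbufs[BURST];

	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Deliberately tiny pool so exhaustion is easy to provoke. */
	src_pool = rte_pktmbuf_pool_create("src_mbufs_demo", 64, 0, 0,
					   RTE_MBUF_DEFAULT_BUF_SIZE,
					   rte_socket_id());
	if (src_pool == NULL)
		return 1;

	if (alloc_src_mbufs(mbufs) == -ENOMEM)
		printf("src mbuf pool exhausted: requeue the task and retry\n");
	else
		rte_pktmbuf_free_bulk(mbufs, BURST);

	rte_eal_cleanup();
	return 0;
}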
00:30:57.688
00:30:57.688 Latency(us)
00:30:57.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:57.688 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:57.688 Verification LBA range: start 0x0 length 0x100
00:30:57.688 crypto_ram : 5.70 44.91 2.81 0.00 0.00 2764283.90 65431.14 2442762.65
00:30:57.689 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x100 length 0x100
00:30:57.689 crypto_ram : 5.67 45.13 2.82 0.00 0.00 2734653.44 82627.79 2348810.24
00:30:57.689 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x0 length 0x100
00:30:57.689 crypto_ram2 : 5.70 44.90 2.81 0.00 0.00 2672287.74 65011.71 2442762.65
00:30:57.689 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x100 length 0x100
00:30:57.689 crypto_ram2 : 5.69 47.63 2.98 0.00 0.00 2538191.09 8808.04 2295123.15
00:30:57.689 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x0 length 0x100
00:30:57.689 crypto_ram3 : 5.54 302.89 18.93 0.00 0.00 380048.94 2136.47 543581.80
00:30:57.689 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x100 length 0x100
00:30:57.689 crypto_ram3 : 5.56 318.80 19.92 0.00 0.00 362576.38 58300.83 546937.24
00:30:57.689 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x0 length 0x100
00:30:57.689 crypto_ram4 : 5.62 320.44 20.03 0.00 0.00 350457.81 14470.35 456340.28
00:30:57.689 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:57.689 Verification LBA range: start 0x100 length 0x100
00:30:57.689 crypto_ram4 : 5.62 334.38 20.90 0.00 0.00 336969.27 16462.64 466406.60
00:30:57.689 ===================================================================================================================
00:30:57.689 Total : 1459.08 91.19 0.00 0.00 651725.65 2136.47 2442762.65
00:30:58.256
00:30:58.256 real 0m8.762s
00:30:58.256 user 0m16.695s
00:30:58.256 sys 0m0.373s
00:30:58.256 12:12:44 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:58.256 12:12:44 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:30:58.256 ************************************
00:30:58.256 END TEST bdev_verify_big_io
00:30:58.256 ************************************
00:30:58.256 12:12:44 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:58.256 12:12:44 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:30:58.256 12:12:44 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:58.256 12:12:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:30:58.256 ************************************
00:30:58.256 START TEST bdev_write_zeroes
00:30:58.256 ************************************
00:30:58.256 12:12:44 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:58.256 [2024-07-25 12:12:44.220860] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:30:58.256 [2024-07-25 12:12:44.220915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127129 ] 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.4 
cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:58.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.256 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:58.256 [2024-07-25 12:12:44.351703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.515 [2024-07-25 12:12:44.437329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.515 [2024-07-25 12:12:44.458569] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:58.515 [2024-07-25 12:12:44.466594] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:58.515 [2024-07-25 12:12:44.474612] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:58.515 [2024-07-25 12:12:44.583469] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:01.050 [2024-07-25 12:12:46.752450] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:01.050 [2024-07-25 12:12:46.752511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:01.050 [2024-07-25 12:12:46.752524] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:01.050 [2024-07-25 12:12:46.760468] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:01.050 [2024-07-25 12:12:46.760487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:01.050 [2024-07-25 12:12:46.760498] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:01.050 [2024-07-25 12:12:46.768503] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:01.050 [2024-07-25 12:12:46.768520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:01.050 [2024-07-25 12:12:46.768531] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev 
arrival
00:31:01.050 [2024-07-25 12:12:46.776509] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:31:01.050 [2024-07-25 12:12:46.776525] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:31:01.050 [2024-07-25 12:12:46.776536] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:31:01.050 Running I/O for 1 seconds...
00:31:01.986
00:31:01.986 Latency(us)
00:31:01.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.986 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:01.986 crypto_ram : 1.02 2118.61 8.28 0.00 0.00 60021.29 4980.74 71722.60
00:31:01.986 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:01.986 crypto_ram2 : 1.02 2124.37 8.30 0.00 0.00 59549.01 4954.52 66689.43
00:31:01.986 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:01.986 crypto_ram3 : 1.02 16279.71 63.59 0.00 0.00 7756.43 2280.65 10013.90
00:31:01.986 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:01.986 crypto_ram4 : 1.02 16317.08 63.74 0.00 0.00 7716.81 2280.65 8074.04
00:31:01.986 ===================================================================================================================
00:31:01.986 Total : 36839.77 143.91 0.00 0.00 13756.00 2280.65 71722.60
00:31:02.246
00:31:02.246 real 0m4.060s
00:31:02.246 user 0m3.692s
00:31:02.246 sys 0m0.324s
00:31:02.246 12:12:48 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:02.246 12:12:48 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:31:02.246 ************************************
00:31:02.246 END TEST bdev_write_zeroes
00:31:02.246 ************************************
00:31:02.246 12:12:48 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:02.246 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:31:02.246 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:02.246 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:31:02.246 ************************************
00:31:02.246 START TEST bdev_json_nonenclosed
00:31:02.246 ************************************
00:31:02.246 12:12:48 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:02.505 [2024-07-25 12:12:48.368760] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:31:02.505 [2024-07-25 12:12:48.368823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127923 ] 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.505 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:02.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:02.506 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:02.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.506 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:02.506 [2024-07-25 12:12:48.500239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.506 [2024-07-25 12:12:48.583042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.506 [2024-07-25 12:12:48.583105] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:31:02.506 [2024-07-25 12:12:48.583121] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:02.506 [2024-07-25 12:12:48.583132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:02.765 00:31:02.765 real 0m0.358s 00:31:02.765 user 0m0.209s 00:31:02.765 sys 0m0.147s 00:31:02.765 12:12:48 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.765 12:12:48 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:02.765 ************************************ 00:31:02.765 END TEST bdev_json_nonenclosed 00:31:02.765 ************************************ 00:31:02.765 12:12:48 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:02.765 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:02.765 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.765 12:12:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:02.765 ************************************ 00:31:02.765 START TEST bdev_json_nonarray 00:31:02.765 ************************************ 00:31:02.765 12:12:48 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:02.765 [2024-07-25 12:12:48.814933] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
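For context on the two negative tests around this point: bdev_json_nonenclosed above is rejected by json_config_prepare_ctx with "Invalid JSON configuration: not enclosed in {}.", and bdev_json_nonarray, whose run starts here, exercises the companion rejection of a config whose 'subsystems' member is not an array. The standalone checker below is a deliberately simplified illustration of just those two surface checks; it is not SPDK's json_config.c, and the hard-coded sample input is an assumption.

/* Simplified, standalone illustration of the two config-shape checks the
 * nonenclosed/nonarray tests exercise.  Not SPDK's json_config.c: a real
 * parser tokenizes the whole document instead of scanning strings. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int check_config_shape(const char *buf)
{
	const char *p = buf;

	while (*p && isspace((unsigned char)*p))
		p++;
	if (*p != '{') {
		fprintf(stderr, "Invalid JSON configuration: not enclosed in {}.\n");
		return -1;
	}

	const char *s = strstr(p, "\"subsystems\"");
	if (s != NULL && (s = strchr(s, ':')) != NULL) {
		s++;
		while (*s && isspace((unsigned char)*s))
			s++;
		if (*s != '[') {
			fprintf(stderr, "Invalid JSON configuration: 'subsystems' should be an array.\n");
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	/* Mimics the nonarray case: "subsystems" maps to an object, not an array. */
	const char *sample = "{ \"subsystems\": { \"subsystem\": \"bdev\" } }";

	return check_config_shape(sample) == 0 ? 0 : 1;
}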
00:31:02.765 [2024-07-25 12:12:48.814992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127954 ] 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:03.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.024 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:03.025 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:03.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.025 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:03.025 [2024-07-25 12:12:48.944013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.025 [2024-07-25 12:12:49.027549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.025 [2024-07-25 12:12:49.027618] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:03.025 [2024-07-25 12:12:49.027634] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:03.025 [2024-07-25 12:12:49.027644] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:03.025 00:31:03.025 real 0m0.356s 00:31:03.025 user 0m0.210s 00:31:03.025 sys 0m0.144s 00:31:03.025 12:12:49 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.025 12:12:49 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:03.025 ************************************ 00:31:03.025 END TEST bdev_json_nonarray 00:31:03.025 ************************************ 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:31:03.284 12:12:49 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:31:03.284 00:31:03.284 real 1m11.041s 00:31:03.284 user 2m54.651s 
00:31:03.284 sys 0m8.498s 00:31:03.284 12:12:49 blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.284 12:12:49 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:03.284 ************************************ 00:31:03.284 END TEST blockdev_crypto_aesni 00:31:03.284 ************************************ 00:31:03.284 12:12:49 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:31:03.284 12:12:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.284 12:12:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.284 12:12:49 -- common/autotest_common.sh@10 -- # set +x 00:31:03.284 ************************************ 00:31:03.284 START TEST blockdev_crypto_sw 00:31:03.284 ************************************ 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:31:03.284 * Looking for test storage... 00:31:03.284 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:31:03.284 12:12:49 
blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=128094 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 128094 00:31:03.284 12:12:49 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 128094 ']' 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.284 12:12:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:03.543 [2024-07-25 12:12:49.430861] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:31:03.543 [2024-07-25 12:12:49.430908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128094 ] 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:03.543 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:03.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:03.543 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:03.543 [2024-07-25 12:12:49.549974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.543 [2024-07-25 12:12:49.636118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.479 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.479 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:31:04.479 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:31:04.479 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:31:04.479 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:31:04.479 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.479 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.479 Malloc0 00:31:04.479 Malloc1 00:31:04.479 true 
00:31:04.479 true 00:31:04.479 true 00:31:04.479 [2024-07-25 12:12:50.525451] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:04.479 crypto_ram 00:31:04.479 [2024-07-25 12:12:50.533479] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:04.479 crypto_ram2 00:31:04.479 [2024-07-25 12:12:50.541501] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:04.479 crypto_ram3 00:31:04.479 [ 00:31:04.479 { 00:31:04.479 "name": "Malloc1", 00:31:04.479 "aliases": [ 00:31:04.479 "e7ff0e19-95b1-4e7c-812a-5c2b89af0e17" 00:31:04.479 ], 00:31:04.479 "product_name": "Malloc disk", 00:31:04.479 "block_size": 4096, 00:31:04.479 "num_blocks": 4096, 00:31:04.479 "uuid": "e7ff0e19-95b1-4e7c-812a-5c2b89af0e17", 00:31:04.479 "assigned_rate_limits": { 00:31:04.479 "rw_ios_per_sec": 0, 00:31:04.479 "rw_mbytes_per_sec": 0, 00:31:04.479 "r_mbytes_per_sec": 0, 00:31:04.479 "w_mbytes_per_sec": 0 00:31:04.479 }, 00:31:04.479 "claimed": true, 00:31:04.479 "claim_type": "exclusive_write", 00:31:04.479 "zoned": false, 00:31:04.479 "supported_io_types": { 00:31:04.479 "read": true, 00:31:04.479 "write": true, 00:31:04.479 "unmap": true, 00:31:04.479 "flush": true, 00:31:04.479 "reset": true, 00:31:04.479 "nvme_admin": false, 00:31:04.479 "nvme_io": false, 00:31:04.479 "nvme_io_md": false, 00:31:04.479 "write_zeroes": true, 00:31:04.479 "zcopy": true, 00:31:04.479 "get_zone_info": false, 00:31:04.479 "zone_management": false, 00:31:04.479 "zone_append": false, 00:31:04.479 "compare": false, 00:31:04.479 "compare_and_write": false, 00:31:04.479 "abort": true, 00:31:04.479 "seek_hole": false, 00:31:04.479 "seek_data": false, 00:31:04.479 "copy": true, 00:31:04.480 "nvme_iov_md": false 00:31:04.480 }, 00:31:04.480 "memory_domains": [ 00:31:04.480 { 00:31:04.480 "dma_device_id": "system", 00:31:04.480 "dma_device_type": 1 00:31:04.480 }, 00:31:04.480 { 00:31:04.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.480 "dma_device_type": 2 00:31:04.480 } 00:31:04.480 ], 00:31:04.480 "driver_specific": {} 00:31:04.480 } 00:31:04.480 ] 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.480 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.480 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:31:04.480 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.480 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 
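For readers following the trace: setup_crypto_sw_conf is building two malloc bdevs and wrapping them in software-crypto vbdevs keyed test_dek_sw, test_dek_sw2 and test_dek_sw3, with crypto_ram3 stacked on top of crypto_ram2 (the base/key pairings are visible in the driver_specific JSON above). A hand-written equivalent of that RPC sequence is sketched below; the key material is illustrative and the exact option spellings may differ between SPDK revisions, so treat it as a sketch rather than the harness's literal commands.

./scripts/rpc.py bdev_malloc_create -b Malloc0 16 512       # 32768 blocks x 512 B, base of crypto_ram
./scripts/rpc.py bdev_malloc_create -b Malloc1 16 4096      # 4096 blocks x 4096 B, base of crypto_ram2
./scripts/rpc.py accel_crypto_key_create -c AES_XTS -n test_dek_sw \
        -k 00112233445566778899aabbccddeeff -e ffeeddccbbaa99887766554433221100   # illustrative key material
# ...repeated for test_dek_sw2 and test_dek_sw3...
./scripts/rpc.py bdev_crypto_create -n test_dek_sw  Malloc0     crypto_ram
./scripts/rpc.py bdev_crypto_create -n test_dek_sw2 Malloc1     crypto_ram2
./scripts/rpc.py bdev_crypto_create -n test_dek_sw3 crypto_ram2 crypto_ram3       # crypto-on-crypto stack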
00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2c388407-c71d-5eb0-922c-e44dd2e36c0b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2c388407-c71d-5eb0-922c-e44dd2e36c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f128d747-c759-5f9a-82c4-6d74dce48ff9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "f128d747-c759-5f9a-82c4-6d74dce48ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:31:04.739 12:12:50 
blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:31:04.739 12:12:50 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 128094 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 128094 ']' 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 128094 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:31:04.739 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128094 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128094' 00:31:04.740 killing process with pid 128094 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 128094 00:31:04.740 12:12:50 blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 128094 00:31:05.308 12:12:51 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:05.308 12:12:51 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:05.308 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:05.308 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.308 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:05.308 ************************************ 00:31:05.308 START TEST bdev_hello_world 00:31:05.308 ************************************ 00:31:05.308 12:12:51 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:05.308 [2024-07-25 12:12:51.247756] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:31:05.308 [2024-07-25 12:12:51.247798] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128470 ] 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.6 cannot be used 
00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:05.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.308 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:05.308 [2024-07-25 12:12:51.362646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.568 [2024-07-25 12:12:51.447524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.568 [2024-07-25 12:12:51.616278] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:05.568 [2024-07-25 12:12:51.616332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:05.568 [2024-07-25 12:12:51.616346] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:05.568 [2024-07-25 12:12:51.624297] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:05.568 [2024-07-25 12:12:51.624314] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:05.568 [2024-07-25 12:12:51.624325] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:05.568 [2024-07-25 12:12:51.632318] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:05.568 [2024-07-25 12:12:51.632335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:05.568 [2024-07-25 12:12:51.632345] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:05.568 [2024-07-25 12:12:51.672006] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:05.568 [2024-07-25 12:12:51.672037] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:31:05.568 [2024-07-25 12:12:51.672053] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:05.568 [2024-07-25 12:12:51.673319] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:05.568 [2024-07-25 12:12:51.673385] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:05.568 [2024-07-25 12:12:51.673400] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:05.568 [2024-07-25 12:12:51.673432] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
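The hello_world pass that just completed can be reproduced outside the harness with the same generated JSON config; the example simply opens the named bdev, writes the string through the sw-crypto stack and reads it back, exactly as the NOTICE lines above show (run from the SPDK checkout used by this job; the harness also passes an empty trailing argument, omitted here):

cd /var/jenkins/workspace/crypto-phy-autotest/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b crypto_ram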
00:31:05.568 00:31:05.568 [2024-07-25 12:12:51.673449] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:05.827 00:31:05.827 real 0m0.646s 00:31:05.827 user 0m0.434s 00:31:05.827 sys 0m0.196s 00:31:05.827 12:12:51 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.827 12:12:51 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:05.828 ************************************ 00:31:05.828 END TEST bdev_hello_world 00:31:05.828 ************************************ 00:31:05.828 12:12:51 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:31:05.828 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.828 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.828 12:12:51 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:05.828 ************************************ 00:31:05.828 START TEST bdev_bounds 00:31:05.828 ************************************ 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=128583 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 128583' 00:31:05.828 Process bdevio pid: 128583 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 128583 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 128583 ']' 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.828 12:12:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:06.087 [2024-07-25 12:12:51.997519] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
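Stripped of the run_test/waitforlisten plumbing, the bounds test being launched here is a two-step affair: start bdevio in wait mode against the same JSON config, then trigger the CUnit suites over RPC (a sketch using the job's paths; -s 0 is simply what the harness passes for the memory size):

./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait for the perform_tests RPC
# ...wait for /var/tmp/spdk.sock to come up (the harness uses waitforlisten)...
./test/bdev/bdevio/tests.py perform_tests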
00:31:06.087 [2024-07-25 12:12:51.997578] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128583 ] 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.6 cannot be used 
00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.087 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:06.087 [2024-07-25 12:12:52.129956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.347 [2024-07-25 12:12:52.217370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.347 [2024-07-25 12:12:52.217463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.347 [2024-07-25 12:12:52.217469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.347 [2024-07-25 12:12:52.376736] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:06.347 [2024-07-25 12:12:52.376799] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:06.347 [2024-07-25 12:12:52.376812] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:06.347 [2024-07-25 12:12:52.384770] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:06.347 [2024-07-25 12:12:52.384792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:06.347 [2024-07-25 12:12:52.384803] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:06.347 [2024-07-25 12:12:52.392780] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:06.347 [2024-07-25 12:12:52.392796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:06.347 [2024-07-25 12:12:52.392807] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:06.915 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.915 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:31:06.915 12:12:52 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:06.915 I/O targets: 00:31:06.915 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:31:06.915 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:31:06.915 00:31:06.915 00:31:06.915 CUnit - A unit testing framework for C - Version 2.1-3 00:31:06.915 http://cunit.sourceforge.net/ 00:31:06.915 
00:31:06.915 00:31:06.915 Suite: bdevio tests on: crypto_ram3 00:31:06.915 Test: blockdev write read block ...passed 00:31:06.915 Test: blockdev write zeroes read block ...passed 00:31:06.915 Test: blockdev write zeroes read no split ...passed 00:31:06.915 Test: blockdev write zeroes read split ...passed 00:31:06.915 Test: blockdev write zeroes read split partial ...passed 00:31:06.915 Test: blockdev reset ...passed 00:31:06.915 Test: blockdev write read 8 blocks ...passed 00:31:06.915 Test: blockdev write read size > 128k ...passed 00:31:06.915 Test: blockdev write read invalid size ...passed 00:31:06.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:06.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:06.915 Test: blockdev write read max offset ...passed 00:31:06.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:06.915 Test: blockdev writev readv 8 blocks ...passed 00:31:06.915 Test: blockdev writev readv 30 x 1block ...passed 00:31:06.915 Test: blockdev writev readv block ...passed 00:31:06.915 Test: blockdev writev readv size > 128k ...passed 00:31:06.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:06.915 Test: blockdev comparev and writev ...passed 00:31:06.915 Test: blockdev nvme passthru rw ...passed 00:31:06.915 Test: blockdev nvme passthru vendor specific ...passed 00:31:06.915 Test: blockdev nvme admin passthru ...passed 00:31:06.915 Test: blockdev copy ...passed 00:31:06.915 Suite: bdevio tests on: crypto_ram 00:31:06.915 Test: blockdev write read block ...passed 00:31:06.915 Test: blockdev write zeroes read block ...passed 00:31:06.915 Test: blockdev write zeroes read no split ...passed 00:31:06.915 Test: blockdev write zeroes read split ...passed 00:31:06.915 Test: blockdev write zeroes read split partial ...passed 00:31:06.915 Test: blockdev reset ...passed 00:31:06.915 Test: blockdev write read 8 blocks ...passed 00:31:06.915 Test: blockdev write read size > 128k ...passed 00:31:06.916 Test: blockdev write read invalid size ...passed 00:31:06.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:06.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:06.916 Test: blockdev write read max offset ...passed 00:31:06.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:06.916 Test: blockdev writev readv 8 blocks ...passed 00:31:06.916 Test: blockdev writev readv 30 x 1block ...passed 00:31:06.916 Test: blockdev writev readv block ...passed 00:31:06.916 Test: blockdev writev readv size > 128k ...passed 00:31:06.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:06.916 Test: blockdev comparev and writev ...passed 00:31:06.916 Test: blockdev nvme passthru rw ...passed 00:31:06.916 Test: blockdev nvme passthru vendor specific ...passed 00:31:06.916 Test: blockdev nvme admin passthru ...passed 00:31:06.916 Test: blockdev copy ...passed 00:31:06.916 00:31:06.916 Run Summary: Type Total Ran Passed Failed Inactive 00:31:06.916 suites 2 2 n/a 0 0 00:31:06.916 tests 46 46 46 0 0 00:31:06.916 asserts 260 260 260 0 n/a 00:31:06.916 00:31:06.916 Elapsed time = 0.079 seconds 00:31:06.916 0 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 128583 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 128583 ']' 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@954 -- # kill -0 128583 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:06.916 12:12:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128583 00:31:06.916 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:06.916 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:06.916 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128583' 00:31:06.916 killing process with pid 128583 00:31:06.916 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 128583 00:31:06.916 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 128583 00:31:07.175 12:12:53 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:07.175 00:31:07.175 real 0m1.279s 00:31:07.175 user 0m3.234s 00:31:07.175 sys 0m0.354s 00:31:07.175 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.175 12:12:53 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:07.175 ************************************ 00:31:07.175 END TEST bdev_bounds 00:31:07.175 ************************************ 00:31:07.175 12:12:53 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:31:07.175 12:12:53 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:07.175 12:12:53 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:07.175 12:12:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:07.436 ************************************ 00:31:07.436 START TEST bdev_nbd 00:31:07.436 ************************************ 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:07.436 12:12:53 
blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=128874 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 128874 /var/tmp/spdk-nbd.sock 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 128874 ']' 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:07.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.436 12:12:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:07.436 [2024-07-25 12:12:53.372025] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
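The NBD leg that bdev_svc is being started for boils down to exporting each bdev as a kernel block device over the dedicated RPC socket and pushing a 1 MiB random pattern through it with dd, as the traces further down show. Condensed into its bare commands (socket and file names as used by this job; the readback file and cmp step are an illustrative stand-in for nbd_common.sh's own verify pass):

rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk crypto_ram  /dev/nbd0
$rpc nbd_start_disk crypto_ram3 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through the crypto bdev
dd if=/dev/nbd0 of=nbdreadback bs=4096 count=256 iflag=direct  # readback file name is illustrative
cmp nbdrandtest nbdreadback                                    # round trip must be bit-exact
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1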
00:31:07.436 [2024-07-25 12:12:53.372082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.436 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.436 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:07.436 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.436 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:07.436 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.436 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:07.436 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:07.437 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:07.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:07.437 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:07.437 [2024-07-25 12:12:53.507235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.696 [2024-07-25 12:12:53.589300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.696 [2024-07-25 12:12:53.756039] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:07.696 [2024-07-25 12:12:53.756104] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:07.696 [2024-07-25 12:12:53.756118] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:07.696 [2024-07-25 12:12:53.764058] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:07.696 [2024-07-25 12:12:53.764079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:07.696 [2024-07-25 12:12:53.764090] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:07.696 [2024-07-25 12:12:53.772080] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:07.696 [2024-07-25 12:12:53.772097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:07.696 [2024-07-25 12:12:53.772107] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd 
-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:08.264 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:08.523 1+0 records in 00:31:08.523 1+0 records out 00:31:08.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220796 s, 18.6 MB/s 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:08.523 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:08.782 12:12:54 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:08.782 1+0 records in 00:31:08.782 1+0 records out 00:31:08.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031586 s, 13.0 MB/s 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:08.782 12:12:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:09.041 { 00:31:09.041 "nbd_device": "/dev/nbd0", 00:31:09.041 "bdev_name": "crypto_ram" 00:31:09.041 }, 00:31:09.041 { 00:31:09.041 "nbd_device": "/dev/nbd1", 00:31:09.041 "bdev_name": "crypto_ram3" 00:31:09.041 } 00:31:09.041 ]' 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:09.041 { 00:31:09.041 "nbd_device": "/dev/nbd0", 00:31:09.041 "bdev_name": "crypto_ram" 00:31:09.041 }, 00:31:09.041 { 00:31:09.041 "nbd_device": "/dev/nbd1", 00:31:09.041 "bdev_name": "crypto_ram3" 00:31:09.041 } 00:31:09.041 ]' 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:09.041 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:09.042 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:09.042 12:12:55 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:09.042 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:09.042 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:09.042 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:09.301 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:09.560 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:09.819 12:12:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:31:10.078 /dev/nbd0 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:10.078 1+0 records in 00:31:10.078 1+0 records out 00:31:10.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267005 s, 
15.3 MB/s 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:10.078 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:31:10.338 /dev/nbd1 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:10.338 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:10.339 1+0 records in 00:31:10.339 1+0 records out 00:31:10.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033163 s, 12.4 MB/s 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:10.339 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:10.598 { 00:31:10.598 "nbd_device": "/dev/nbd0", 00:31:10.598 "bdev_name": "crypto_ram" 00:31:10.598 }, 00:31:10.598 { 00:31:10.598 "nbd_device": "/dev/nbd1", 00:31:10.598 "bdev_name": "crypto_ram3" 00:31:10.598 } 00:31:10.598 ]' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:10.598 { 00:31:10.598 "nbd_device": "/dev/nbd0", 00:31:10.598 "bdev_name": "crypto_ram" 00:31:10.598 }, 00:31:10.598 { 00:31:10.598 "nbd_device": "/dev/nbd1", 00:31:10.598 "bdev_name": "crypto_ram3" 00:31:10.598 } 00:31:10.598 ]' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:10.598 /dev/nbd1' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:10.598 /dev/nbd1' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:10.598 256+0 records in 00:31:10.598 256+0 records out 00:31:10.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111334 s, 94.2 MB/s 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:10.598 256+0 records in 00:31:10.598 256+0 records out 00:31:10.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028783 s, 36.4 MB/s 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:10.598 256+0 records in 00:31:10.598 256+0 records out 00:31:10.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0428789 s, 24.5 MB/s 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:10.598 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:10.857 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:10.857 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:10.857 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:10.857 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:10.858 12:12:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:11.117 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:11.375 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:11.634 malloc_lvol_verify 00:31:11.634 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:11.894 ddad71f3-3ed3-4583-bc48-f1f87e0534fc 00:31:11.894 12:12:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:12.153 50a9a01f-38aa-4c47-9f08-a1a0ab3f6224 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:12.153 /dev/nbd0 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:12.153 mke2fs 1.46.5 (30-Dec-2021) 00:31:12.153 Discarding device blocks: 0/4096 done 00:31:12.153 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:12.153 00:31:12.153 Allocating group tables: 0/1 done 00:31:12.153 Writing inode tables: 0/1 done 00:31:12.153 Creating journal (1024 blocks): done 00:31:12.153 Writing superblocks and filesystem accounting information: 0/1 done 00:31:12.153 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:12.153 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 128874 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 128874 ']' 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 128874 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:12.412 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128874 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128874' 
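Editor's note: the nbd_with_lvol_verify stage traced above reduces to a short RPC sequence against the spdk-nbd.sock server. The following is a minimal sketch of that sequence, assuming an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and rpc.py is invoked from an SPDK checkout; the $RPC shorthand is introduced here for readability and is not part of the test scripts. All RPC names and arguments are taken from the trace.

  # Sketch of the lvol-over-NBD verification performed by nbd_common.sh@131-147
  RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev (16 MiB, 512-byte blocks)
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
  $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume inside the lvstore
  $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # formatting succeeds only if I/O works end to end
  $RPC nbd_stop_disk /dev/nbd0                           # tear the NBD mapping back down

The mke2fs output above ("4096 1k blocks") confirms the 4 MiB lvol size seen by the kernel.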
00:31:12.671 killing process with pid 128874 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 128874 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 128874 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:12.671 00:31:12.671 real 0m5.453s 00:31:12.671 user 0m7.609s 00:31:12.671 sys 0m2.270s 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.671 12:12:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:12.671 ************************************ 00:31:12.671 END TEST bdev_nbd 00:31:12.671 ************************************ 00:31:12.931 12:12:58 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:31:12.931 12:12:58 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:31:12.931 12:12:58 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:31:12.931 12:12:58 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:31:12.931 12:12:58 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:12.931 12:12:58 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.931 12:12:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:12.931 ************************************ 00:31:12.931 START TEST bdev_fio 00:31:12.931 ************************************ 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:12.931 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:31:12.931 12:12:58 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:12.931 ************************************ 00:31:12.931 START TEST bdev_fio_rw_verify 00:31:12.931 ************************************ 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:12.931 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:12.932 12:12:58 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.932 12:12:59 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.932 12:12:59 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.932 12:12:59 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:12.932 12:12:59 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:13.531 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:13.531 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:13.531 fio-3.35 00:31:13.531 Starting 2 threads 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.531 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:13.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:13.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:13.532 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:25.724 00:31:25.724 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=130200: Thu Jul 25 12:13:09 2024 00:31:25.724 read: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(837MiB/10001msec) 00:31:25.724 slat (usec): min=8, max=261, avg=22.96, stdev=10.44 00:31:25.724 clat (usec): min=5, max=851, avg=159.29, stdev=90.20 00:31:25.724 lat (usec): min=16, max=927, avg=182.25, stdev=96.29 00:31:25.724 clat percentiles (usec): 00:31:25.724 | 50.000th=[ 141], 99.000th=[ 388], 99.900th=[ 441], 99.990th=[ 717], 00:31:25.724 | 99.999th=[ 824] 00:31:25.724 write: IOPS=25.7k, BW=101MiB/s (105MB/s)(954MiB/9489msec); 0 zone resets 00:31:25.724 slat (usec): min=8, max=753, avg=34.48, stdev=12.52 00:31:25.724 clat (usec): min=20, max=1188, avg=201.87, stdev=119.50 00:31:25.724 lat (usec): min=38, max=1224, avg=236.36, stdev=127.04 00:31:25.724 clat percentiles (usec): 00:31:25.724 | 50.000th=[ 180], 99.000th=[ 523], 99.900th=[ 594], 99.990th=[ 668], 00:31:25.724 | 99.999th=[ 1106] 00:31:25.724 bw ( KiB/s): min=84616, max=145752, per=94.99%, avg=97836.63, stdev=6851.01, samples=38 00:31:25.724 iops : min=21154, max=36438, avg=24459.16, stdev=1712.75, samples=38 00:31:25.724 lat (usec) : 10=0.01%, 20=0.01%, 50=7.17%, 100=17.37%, 250=52.19% 00:31:25.724 lat (usec) : 500=22.31%, 750=0.96%, 1000=0.01% 00:31:25.725 lat (msec) : 2=0.01% 00:31:25.725 cpu : usr=99.64%, sys=0.01%, ctx=59, majf=0, minf=453 00:31:25.725 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.725 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.725 issued rwts: total=214179,244339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.725 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:25.725 00:31:25.725 Run status group 0 (all jobs): 00:31:25.725 READ: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=837MiB (877MB), run=10001-10001msec 00:31:25.725 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=954MiB (1001MB), run=9489-9489msec 00:31:25.725 00:31:25.725 real 0m11.235s 00:31:25.725 user 0m31.098s 00:31:25.725 sys 0m0.376s 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:31:25.725 
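Editor's note: the bdev_fio_rw_verify run above is ultimately a stock fio run with SPDK's external bdev ioengine preloaded. A condensed sketch of the traced invocation follows; $SPDK is a shorthand introduced here for the /var/jenkins/workspace/crypto-phy-autotest/spdk checkout, and every flag is taken from the trace above.

  # Condensed form of the traced fio_bdev invocation (randwrite with verify over the crypto bdevs)
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      $SPDK/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=$SPDK/test/bdev/bdev.json --spdk_mem=0 \
      --aux-path=$SPDK/../output

The bdev.fio job file and bdev.json configuration referenced here are the ones generated earlier in this stage (fio_config_gen plus the [job_crypto_ram] and [job_crypto_ram3] sections echoed at blockdev.sh@341-342).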
************************************ 00:31:25.725 END TEST bdev_fio_rw_verify 00:31:25.725 ************************************ 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2c388407-c71d-5eb0-922c-e44dd2e36c0b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2c388407-c71d-5eb0-922c-e44dd2e36c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f128d747-c759-5f9a-82c4-6d74dce48ff9"' ' ],' ' 
"product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "f128d747-c759-5f9a-82c4-6d74dce48ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:31:25.725 crypto_ram3 ]] 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2c388407-c71d-5eb0-922c-e44dd2e36c0b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2c388407-c71d-5eb0-922c-e44dd2e36c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f128d747-c759-5f9a-82c4-6d74dce48ff9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "f128d747-c759-5f9a-82c4-6d74dce48ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:25.725 ************************************ 00:31:25.725 START TEST bdev_fio_trim 00:31:25.725 ************************************ 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.725 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:25.726 12:13:10 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:25.726 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.726 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.726 fio-3.35 00:31:25.726 Starting 2 threads 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 
0000:3d:01.7 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:25.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.726 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:35.689 00:31:35.689 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=132194: Thu Jul 25 12:13:21 2024 00:31:35.689 write: IOPS=49.8k, 
BW=194MiB/s (204MB/s)(1944MiB/10001msec); 0 zone resets 00:31:35.689 slat (usec): min=9, max=1710, avg=17.42, stdev= 6.42 00:31:35.689 clat (usec): min=15, max=1840, avg=132.40, stdev=95.57 00:31:35.689 lat (usec): min=31, max=1853, avg=149.81, stdev=100.89 00:31:35.689 clat percentiles (usec): 00:31:35.689 | 50.000th=[ 87], 99.000th=[ 318], 99.900th=[ 359], 99.990th=[ 400], 00:31:35.689 | 99.999th=[ 988] 00:31:35.689 bw ( KiB/s): min=192824, max=202552, per=100.00%, avg=199080.00, stdev=1033.00, samples=38 00:31:35.689 iops : min=48206, max=50638, avg=49770.00, stdev=258.25, samples=38 00:31:35.689 trim: IOPS=49.8k, BW=194MiB/s (204MB/s)(1944MiB/10001msec); 0 zone resets 00:31:35.689 slat (nsec): min=3678, max=89048, avg=8012.08, stdev=2908.86 00:31:35.689 clat (usec): min=31, max=1853, avg=88.19, stdev=28.49 00:31:35.689 lat (usec): min=37, max=1861, avg=96.20, stdev=29.67 00:31:35.689 clat percentiles (usec): 00:31:35.689 | 50.000th=[ 87], 99.000th=[ 151], 99.900th=[ 165], 99.990th=[ 186], 00:31:35.689 | 99.999th=[ 758] 00:31:35.689 bw ( KiB/s): min=192824, max=202552, per=100.00%, avg=199081.68, stdev=1033.47, samples=38 00:31:35.689 iops : min=48206, max=50638, avg=49770.42, stdev=258.37, samples=38 00:31:35.689 lat (usec) : 20=0.01%, 50=13.53%, 100=49.20%, 250=26.97%, 500=10.31% 00:31:35.689 lat (usec) : 1000=0.01% 00:31:35.689 lat (msec) : 2=0.01% 00:31:35.689 cpu : usr=99.67%, sys=0.00%, ctx=26, majf=0, minf=273 00:31:35.689 IO depths : 1=8.4%, 2=18.8%, 4=58.3%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.689 complete : 0=0.0%, 4=87.3%, 8=12.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.689 issued rwts: total=0,497649,497649,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.689 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.689 00:31:35.689 Run status group 0 (all jobs): 00:31:35.689 WRITE: bw=194MiB/s (204MB/s), 194MiB/s-194MiB/s (204MB/s-204MB/s), io=1944MiB (2038MB), run=10001-10001msec 00:31:35.689 TRIM: bw=194MiB/s (204MB/s), 194MiB/s-194MiB/s (204MB/s-204MB/s), io=1944MiB (2038MB), run=10001-10001msec 00:31:35.689 00:31:35.689 real 0m11.177s 00:31:35.689 user 0m31.201s 00:31:35.689 sys 0m0.353s 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:31:35.689 ************************************ 00:31:35.689 END TEST bdev_fio_trim 00:31:35.689 ************************************ 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:31:35.689 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:35.689 00:31:35.689 real 0m22.763s 00:31:35.689 user 1m2.472s 00:31:35.689 sys 0m0.925s 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.689 12:13:21 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:35.689 ************************************ 00:31:35.689 END TEST bdev_fio 00:31:35.689 ************************************ 00:31:35.689 12:13:21 
blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:35.689 12:13:21 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:35.689 12:13:21 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:35.689 12:13:21 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:35.689 12:13:21 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:35.689 ************************************ 00:31:35.690 START TEST bdev_verify 00:31:35.690 ************************************ 00:31:35.690 12:13:21 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:35.690 [2024-07-25 12:13:21.742696] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:31:35.690 [2024-07-25 12:13:21.742752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133822 ] 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested 
device 0000:3d:02.6 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:35.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.948 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:35.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:35.949 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:35.949 [2024-07-25 12:13:21.873975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:35.949 [2024-07-25 12:13:21.958690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.949 [2024-07-25 12:13:21.958696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.207 [2024-07-25 12:13:22.119732] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:36.207 [2024-07-25 12:13:22.119793] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:36.207 [2024-07-25 12:13:22.119807] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:36.207 [2024-07-25 12:13:22.127752] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:36.207 [2024-07-25 12:13:22.127769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:36.207 [2024-07-25 12:13:22.127780] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:36.207 
[2024-07-25 12:13:22.135775] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:36.207 [2024-07-25 12:13:22.135792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:36.207 [2024-07-25 12:13:22.135802] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:36.207 Running I/O for 5 seconds... 00:31:41.517 00:31:41.517 Latency(us) 00:31:41.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.517 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:41.517 Verification LBA range: start 0x0 length 0x800 00:31:41.517 crypto_ram : 5.03 5832.11 22.78 0.00 0.00 21859.16 1481.11 25899.83 00:31:41.517 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:41.517 Verification LBA range: start 0x800 length 0x800 00:31:41.517 crypto_ram : 5.03 5833.07 22.79 0.00 0.00 21855.90 1703.94 26004.68 00:31:41.517 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:41.517 Verification LBA range: start 0x0 length 0x800 00:31:41.517 crypto_ram3 : 5.03 2924.57 11.42 0.00 0.00 43535.39 1743.26 31667.00 00:31:41.517 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:41.517 Verification LBA range: start 0x800 length 0x800 00:31:41.517 crypto_ram3 : 5.03 2925.06 11.43 0.00 0.00 43524.42 1966.08 31667.00 00:31:41.517 =================================================================================================================== 00:31:41.517 Total : 17514.80 68.42 0.00 0.00 29102.65 1481.11 31667.00 00:31:41.517 00:31:41.517 real 0m5.747s 00:31:41.517 user 0m10.851s 00:31:41.517 sys 0m0.221s 00:31:41.517 12:13:27 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.517 12:13:27 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:31:41.517 ************************************ 00:31:41.517 END TEST bdev_verify 00:31:41.517 ************************************ 00:31:41.517 12:13:27 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:41.517 12:13:27 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:41.517 12:13:27 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.517 12:13:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:41.517 ************************************ 00:31:41.517 START TEST bdev_verify_big_io 00:31:41.517 ************************************ 00:31:41.517 12:13:27 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:41.517 [2024-07-25 12:13:27.575859] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
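[Reader note, not harness output] A quick sanity check on the bdev_verify table above: at queue depth 128, Little's law predicts average latency ≈ 128 / IOPS, i.e. 128 / 5832.11 ≈ 0.0219 s ≈ 21900 us against the reported 21859.16 us for crypto_ram, and 128 / 2924.57 ≈ 0.0438 s ≈ 43800 us against the reported 43535.39 us for crypto_ram3. The roughly halved IOPS and doubled latency for crypto_ram3 are consistent with the stacked setup suggested by the notices above, where the test_dek_sw3 key is created on base bdev crypto_ram2, so each crypto_ram3 I/O appears to pass through two software crypto bdevs rather than one.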
00:31:41.517 [2024-07-25 12:13:27.575918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134864 ]
00:31:41.776 [repeated notices condensed: "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", for every QAT VF from 0000:3d:01.0 through 0000:3f:01.6; the remaining VFs continue below]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:41.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:41.776 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:41.776 [2024-07-25 12:13:27.708402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:41.776 [2024-07-25 12:13:27.791977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.776 [2024-07-25 12:13:27.791982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.035 [2024-07-25 12:13:27.949779] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:42.035 [2024-07-25 12:13:27.949841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:42.035 [2024-07-25 12:13:27.949854] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:42.035 [2024-07-25 12:13:27.957803] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:42.035 [2024-07-25 12:13:27.957821] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:42.035 [2024-07-25 12:13:27.957832] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:42.035 [2024-07-25 12:13:27.965826] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:42.035 [2024-07-25 12:13:27.965843] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:42.035 [2024-07-25 12:13:27.965853] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:42.035 Running I/O for 5 seconds... 
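[Reader note] The verify jobs in this suite are plain bdevperf runs against the JSON config that declares the crypto bdevs; the flags below are copied from the command line traced above, with the workspace paths shortened to the SPDK repo root for readability:

  # 64 KiB verify workload, queue depth 128, 5 seconds, cores 0-1 (-m 0x3)
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The earlier bdev_verify run is the same invocation with -o 4096 (4 KiB I/O).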
00:31:47.333 00:31:47.333 Latency(us) 00:31:47.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.333 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:47.333 Verification LBA range: start 0x0 length 0x80 00:31:47.333 crypto_ram : 5.15 447.13 27.95 0.00 0.00 279659.39 6239.03 362387.87 00:31:47.333 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:47.333 Verification LBA range: start 0x80 length 0x80 00:31:47.333 crypto_ram : 5.16 446.71 27.92 0.00 0.00 279911.73 5242.88 364065.59 00:31:47.333 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:47.333 Verification LBA range: start 0x0 length 0x80 00:31:47.333 crypto_ram3 : 5.34 239.90 14.99 0.00 0.00 502801.25 5845.81 385875.97 00:31:47.333 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:47.333 Verification LBA range: start 0x80 length 0x80 00:31:47.333 crypto_ram3 : 5.34 239.66 14.98 0.00 0.00 503309.23 5242.88 380842.80 00:31:47.333 =================================================================================================================== 00:31:47.333 Total : 1373.40 85.84 0.00 0.00 359524.73 5242.88 385875.97 00:31:47.591 00:31:47.591 real 0m6.062s 00:31:47.591 user 0m11.477s 00:31:47.591 sys 0m0.224s 00:31:47.591 12:13:33 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.591 12:13:33 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:31:47.591 ************************************ 00:31:47.591 END TEST bdev_verify_big_io 00:31:47.591 ************************************ 00:31:47.591 12:13:33 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:47.591 12:13:33 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:47.591 12:13:33 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.591 12:13:33 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:47.591 ************************************ 00:31:47.591 START TEST bdev_write_zeroes 00:31:47.591 ************************************ 00:31:47.591 12:13:33 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:47.848 [2024-07-25 12:13:33.723826] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:31:47.848 [2024-07-25 12:13:33.723880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135916 ]
00:31:47.848 [repeated notices condensed: "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", for every QAT VF from 0000:3d:01.0 through 0000:3f:01.6; the remaining VFs continue below]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:47.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:47.849 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:47.849 [2024-07-25 12:13:33.856705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.849 [2024-07-25 12:13:33.939351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.106 [2024-07-25 12:13:34.105596] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:48.106 [2024-07-25 12:13:34.105650] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:48.106 [2024-07-25 12:13:34.105664] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:48.106 [2024-07-25 12:13:34.113614] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:48.106 [2024-07-25 12:13:34.113632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:48.106 [2024-07-25 12:13:34.113643] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:48.106 [2024-07-25 12:13:34.121635] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:48.106 [2024-07-25 12:13:34.121651] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:48.106 [2024-07-25 12:13:34.121662] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:48.106 Running I/O for 1 seconds... 
00:31:49.476 00:31:49.476 Latency(us) 00:31:49.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.476 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:49.476 crypto_ram : 1.01 28597.27 111.71 0.00 0.00 4465.11 1939.87 6212.81 00:31:49.476 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:49.476 crypto_ram3 : 1.01 14328.10 55.97 0.00 0.00 8871.66 3119.51 9227.47 00:31:49.476 =================================================================================================================== 00:31:49.476 Total : 42925.37 167.68 0.00 0.00 5938.31 1939.87 9227.47 00:31:49.476 00:31:49.476 real 0m1.705s 00:31:49.476 user 0m1.466s 00:31:49.476 sys 0m0.220s 00:31:49.476 12:13:35 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.476 12:13:35 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:31:49.476 ************************************ 00:31:49.476 END TEST bdev_write_zeroes 00:31:49.476 ************************************ 00:31:49.476 12:13:35 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:49.476 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:49.476 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.476 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:49.476 ************************************ 00:31:49.476 START TEST bdev_json_nonenclosed 00:31:49.476 ************************************ 00:31:49.476 12:13:35 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:49.476 [2024-07-25 12:13:35.515007] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:31:49.476 [2024-07-25 12:13:35.515060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136214 ]
00:31:49.476 [repeated notices condensed: "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", for every QAT VF from 0000:3d:01.0 through 0000:3f:01.6; the remaining VFs continue below]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:49.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.477 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:49.735 [2024-07-25 12:13:35.645220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.735 [2024-07-25 12:13:35.727817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.735 [2024-07-25 12:13:35.727881] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:31:49.735 [2024-07-25 12:13:35.727897] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:49.735 [2024-07-25 12:13:35.727908] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:49.735 00:31:49.735 real 0m0.354s 00:31:49.735 user 0m0.201s 00:31:49.735 sys 0m0.151s 00:31:49.735 12:13:35 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.735 12:13:35 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:49.735 ************************************ 00:31:49.735 END TEST bdev_json_nonenclosed 00:31:49.735 ************************************ 00:31:49.993 12:13:35 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:49.993 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:49.993 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.993 12:13:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:49.993 ************************************ 00:31:49.993 START TEST bdev_json_nonarray 00:31:49.993 ************************************ 00:31:49.993 12:13:35 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:49.993 [2024-07-25 12:13:35.958135] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
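[Reader note] bdev_json_nonenclosed above and bdev_json_nonarray below feed bdevperf deliberately malformed --json files and expect a clean failure ("spdk_app_stop'd on non-zero") rather than a crash. For context, a minimal well-formed SPDK JSON config is a single object whose "subsystems" member is an array; the skeleton below is illustrative only, since the actual nonenclosed.json and nonarray.json contents are not reproduced in this log:

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }

nonenclosed.json omits the enclosing braces, triggering "Invalid JSON configuration: not enclosed in {}", while nonarray.json makes "subsystems" something other than an array, triggering "'subsystems' should be an array".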
00:31:49.993 [2024-07-25 12:13:35.958193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136241 ]
00:31:49.993 [repeated notices condensed: "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", for every QAT VF from 0000:3d:01.0 through 0000:3f:01.6; the remaining VFs continue below]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:49.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:49.993 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:49.993 [2024-07-25 12:13:36.089891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.250 [2024-07-25 12:13:36.173053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.250 [2024-07-25 12:13:36.173123] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:50.250 [2024-07-25 12:13:36.173143] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:50.250 [2024-07-25 12:13:36.173154] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:50.250 00:31:50.250 real 0m0.358s 00:31:50.250 user 0m0.200s 00:31:50.250 sys 0m0.157s 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:50.250 ************************************ 00:31:50.250 END TEST bdev_json_nonarray 00:31:50.250 ************************************ 00:31:50.250 12:13:36 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:31:50.250 12:13:36 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:31:50.250 12:13:36 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:31:50.250 12:13:36 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:31:50.250 12:13:36 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:50.250 12:13:36 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:50.250 12:13:36 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:50.250 ************************************ 00:31:50.250 START TEST bdev_crypto_enomem 00:31:50.250 ************************************ 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 
-- # local err_dev=EE_base0 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=136267 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 136267 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 136267 ']' 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:50.250 12:13:36 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:50.509 [2024-07-25 12:13:36.405856] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:31:50.509 [2024-07-25 12:13:36.405912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136267 ] 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:50.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:50.509 EAL: Requested device 0000:3d:02.2 cannot be used 
00:31:50.509 [repeated notices condensed: "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", continuing the run started above, for every QAT VF from 0000:3d:02.3 through 0000:3f:02.7]
00:31:50.509 [2024-07-25 12:13:36.526066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:50.509 [2024-07-25 12:13:36.610400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0
00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd
00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem --
common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:51.441 true 00:31:51.441 base0 00:31:51.441 true 00:31:51.441 [2024-07-25 12:13:37.331277] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:51.441 crypt0 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:31:51.441 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:51.442 [ 00:31:51.442 { 00:31:51.442 "name": "crypt0", 00:31:51.442 "aliases": [ 00:31:51.442 "cf140a8c-5eea-59e2-bf3f-2bea1d4a2611" 00:31:51.442 ], 00:31:51.442 "product_name": "crypto", 00:31:51.442 "block_size": 512, 00:31:51.442 "num_blocks": 2097152, 00:31:51.442 "uuid": "cf140a8c-5eea-59e2-bf3f-2bea1d4a2611", 00:31:51.442 "assigned_rate_limits": { 00:31:51.442 "rw_ios_per_sec": 0, 00:31:51.442 "rw_mbytes_per_sec": 0, 00:31:51.442 "r_mbytes_per_sec": 0, 00:31:51.442 "w_mbytes_per_sec": 0 00:31:51.442 }, 00:31:51.442 "claimed": false, 00:31:51.442 "zoned": false, 00:31:51.442 "supported_io_types": { 00:31:51.442 "read": true, 00:31:51.442 "write": true, 00:31:51.442 "unmap": false, 00:31:51.442 "flush": false, 00:31:51.442 "reset": true, 00:31:51.442 "nvme_admin": false, 00:31:51.442 "nvme_io": false, 00:31:51.442 "nvme_io_md": false, 00:31:51.442 "write_zeroes": true, 00:31:51.442 "zcopy": false, 00:31:51.442 "get_zone_info": false, 00:31:51.442 "zone_management": false, 00:31:51.442 "zone_append": false, 00:31:51.442 "compare": false, 00:31:51.442 "compare_and_write": false, 00:31:51.442 "abort": false, 00:31:51.442 "seek_hole": false, 00:31:51.442 "seek_data": false, 00:31:51.442 "copy": false, 00:31:51.442 "nvme_iov_md": false 00:31:51.442 }, 00:31:51.442 "memory_domains": [ 00:31:51.442 { 00:31:51.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.442 "dma_device_type": 2 00:31:51.442 } 00:31:51.442 ], 00:31:51.442 "driver_specific": { 00:31:51.442 "crypto": { 00:31:51.442 "base_bdev_name": "EE_base0", 00:31:51.442 "name": "crypt0", 00:31:51.442 
"key_name": "test_dek_sw" 00:31:51.442 } 00:31:51.442 } 00:31:51.442 } 00:31:51.442 ] 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=136522 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:31:51.442 12:13:37 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:51.442 Running I/O for 5 seconds... 00:31:52.373 12:13:38 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:31:52.373 12:13:38 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.373 12:13:38 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:52.373 12:13:38 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.373 12:13:38 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 136522 00:31:56.547 00:31:56.547 Latency(us) 00:31:56.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.547 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:31:56.548 crypt0 : 5.00 39051.81 152.55 0.00 0.00 815.92 388.30 1146.88 00:31:56.548 =================================================================================================================== 00:31:56.548 Total : 39051.81 152.55 0.00 0.00 815.92 388.30 1146.88 00:31:56.548 0 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 136267 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 136267 ']' 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 136267 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136267 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136267' 00:31:56.548 killing process with pid 136267 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 136267 00:31:56.548 Received shutdown signal, test time was about 
5.000000 seconds 00:31:56.548 00:31:56.548 Latency(us) 00:31:56.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.548 =================================================================================================================== 00:31:56.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:56.548 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 136267 00:31:56.806 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:31:56.806 00:31:56.806 real 0m6.404s 00:31:56.806 user 0m6.647s 00:31:56.806 sys 0m0.361s 00:31:56.806 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:56.806 12:13:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:56.806 ************************************ 00:31:56.806 END TEST bdev_crypto_enomem 00:31:56.806 ************************************ 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # cleanup 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:31:56.806 12:13:42 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:31:56.806 00:31:56.806 real 0m53.552s 00:31:56.806 user 1m46.877s 00:31:56.806 sys 0m6.275s 00:31:56.806 12:13:42 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:56.806 12:13:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:56.806 ************************************ 00:31:56.806 END TEST blockdev_crypto_sw 00:31:56.806 ************************************ 00:31:56.806 12:13:42 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:31:56.806 12:13:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:56.806 12:13:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:56.806 12:13:42 -- common/autotest_common.sh@10 -- # set +x 00:31:56.806 ************************************ 00:31:56.806 START TEST blockdev_crypto_qat 00:31:56.806 ************************************ 00:31:56.806 12:13:42 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:31:57.064 * Looking for test storage... 
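[Reader note] For reference, the bdev_crypto_enomem run above builds its stack over an error-injection bdev and then forces ENOMEM completions while bdevperf drives the 32-deep randwrite workload against crypt0. A rough reconstruction of the RPC sequence is sketched below; only the bdev_error_inject_error line is taken verbatim from the trace, and the create calls with their option spellings are assumptions that vary between SPDK releases:

  # malloc base (1024 MiB / 512-byte blocks matches the 2097152 blocks reported for crypt0)
  ./scripts/rpc.py bdev_malloc_create -b base0 1024 512
  # wrap it in an error-injection bdev, exposed as EE_base0
  ./scripts/rpc.py bdev_error_create base0
  # software crypto bdev on top, referencing the previously registered test_dek_sw key
  ./scripts/rpc.py bdev_crypto_create EE_base0 crypt0 --key-name test_dek_sw
  # while the randwrite job runs, inject nomem completions on writes (as in the trace)
  ./scripts/rpc.py bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem

The run passes when the stack absorbs the injected ENOMEM completions and the 5-second job still finishes with zero failed I/O, as the crypt0 table above shows.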
00:31:57.064 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:57.064 12:13:42 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # env_ctx= 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=137402 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:57.064 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 137402 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 137402 ']' 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:57.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.064 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:57.064 [2024-07-25 12:13:43.087291] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:31:57.064 [2024-07-25 12:13:43.087355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137402 ] 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:57.064 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:57.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:57.064 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:57.322 [2024-07-25 12:13:43.213760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.322 [2024-07-25 12:13:43.302736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.886 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.886 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:31:57.886 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:31:57.886 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:31:57.886 12:13:43 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:31:57.886 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.886 12:13:43 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:57.886 [2024-07-25 12:13:43.904640] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:57.886 [2024-07-25 12:13:43.912673] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:57.886 [2024-07-25 12:13:43.920690] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:57.886 [2024-07-25 12:13:43.990012] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:00.444 true 00:32:00.444 true 00:32:00.444 true 00:32:00.444 true 00:32:00.444 Malloc0 00:32:00.444 Malloc1 00:32:00.444 Malloc2 00:32:00.444 Malloc3 00:32:00.445 [2024-07-25 12:13:46.306057] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:00.445 crypto_ram 00:32:00.445 [2024-07-25 12:13:46.314073] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:00.445 crypto_ram1 
00:32:00.445 [2024-07-25 12:13:46.322096] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:00.445 crypto_ram2 00:32:00.445 [2024-07-25 12:13:46.330114] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:00.445 crypto_ram3 00:32:00.445 [ 00:32:00.445 { 00:32:00.445 "name": "Malloc1", 00:32:00.445 "aliases": [ 00:32:00.445 "b713ae6d-b0de-4515-a7d4-e620f8030a0b" 00:32:00.445 ], 00:32:00.445 "product_name": "Malloc disk", 00:32:00.445 "block_size": 512, 00:32:00.445 "num_blocks": 65536, 00:32:00.445 "uuid": "b713ae6d-b0de-4515-a7d4-e620f8030a0b", 00:32:00.445 "assigned_rate_limits": { 00:32:00.445 "rw_ios_per_sec": 0, 00:32:00.445 "rw_mbytes_per_sec": 0, 00:32:00.445 "r_mbytes_per_sec": 0, 00:32:00.445 "w_mbytes_per_sec": 0 00:32:00.445 }, 00:32:00.445 "claimed": true, 00:32:00.445 "claim_type": "exclusive_write", 00:32:00.445 "zoned": false, 00:32:00.445 "supported_io_types": { 00:32:00.445 "read": true, 00:32:00.445 "write": true, 00:32:00.445 "unmap": true, 00:32:00.445 "flush": true, 00:32:00.445 "reset": true, 00:32:00.445 "nvme_admin": false, 00:32:00.445 "nvme_io": false, 00:32:00.445 "nvme_io_md": false, 00:32:00.445 "write_zeroes": true, 00:32:00.445 "zcopy": true, 00:32:00.445 "get_zone_info": false, 00:32:00.445 "zone_management": false, 00:32:00.445 "zone_append": false, 00:32:00.445 "compare": false, 00:32:00.445 "compare_and_write": false, 00:32:00.445 "abort": true, 00:32:00.445 "seek_hole": false, 00:32:00.445 "seek_data": false, 00:32:00.445 "copy": true, 00:32:00.445 "nvme_iov_md": false 00:32:00.445 }, 00:32:00.445 "memory_domains": [ 00:32:00.445 { 00:32:00.445 "dma_device_id": "system", 00:32:00.445 "dma_device_type": 1 00:32:00.445 }, 00:32:00.445 { 00:32:00.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.445 "dma_device_type": 2 00:32:00.445 } 00:32:00.445 ], 00:32:00.445 "driver_specific": {} 00:32:00.445 } 00:32:00.445 ] 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.445 12:13:46 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:00.445 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:32:00.445 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "02104307-47aa-57c2-b810-ffae89b1319e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02104307-47aa-57c2-b810-ffae89b1319e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "482c2f5e-eea6-5879-9a1e-f905a88dc45c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "482c2f5e-eea6-5879-9a1e-f905a88dc45c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:00.703 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:32:00.703 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:32:00.703 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:32:00.703 12:13:46 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 137402 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 137402 ']' 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 137402 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137402 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137402' 00:32:00.703 killing process with pid 137402 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 137402 00:32:00.703 12:13:46 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 137402 00:32:01.268 12:13:47 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:01.268 12:13:47 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test 
bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:01.268 12:13:47 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:01.268 12:13:47 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.268 12:13:47 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:01.268 ************************************ 00:32:01.268 START TEST bdev_hello_world 00:32:01.268 ************************************ 00:32:01.268 12:13:47 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:01.268 [2024-07-25 12:13:47.205726] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:32:01.268 [2024-07-25 12:13:47.205781] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138190 ] 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:01.268 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:01.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.268 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:01.268 [2024-07-25 12:13:47.337340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.526 [2024-07-25 12:13:47.420580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.526 [2024-07-25 12:13:47.441817] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:01.526 [2024-07-25 12:13:47.449845] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:01.526 [2024-07-25 12:13:47.457863] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:01.526 [2024-07-25 12:13:47.562040] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:04.050 [2024-07-25 12:13:49.737056] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:04.050 [2024-07-25 12:13:49.737117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:04.050 [2024-07-25 12:13:49.737131] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:04.050 [2024-07-25 12:13:49.745075] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 
00:32:04.050 [2024-07-25 12:13:49.745093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:04.050 [2024-07-25 12:13:49.745104] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:04.050 [2024-07-25 12:13:49.753094] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:04.050 [2024-07-25 12:13:49.753112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:04.050 [2024-07-25 12:13:49.753122] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:04.050 [2024-07-25 12:13:49.761116] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:04.050 [2024-07-25 12:13:49.761133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:04.050 [2024-07-25 12:13:49.761150] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:04.050 [2024-07-25 12:13:49.832877] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:04.050 [2024-07-25 12:13:49.832918] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:04.050 [2024-07-25 12:13:49.832936] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:04.050 [2024-07-25 12:13:49.834105] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:04.050 [2024-07-25 12:13:49.834175] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:04.050 [2024-07-25 12:13:49.834191] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:04.050 [2024-07-25 12:13:49.834230] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
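The "Hello World!" read-back above is the tail of the hello_bdev example run against crypto_ram; it can be reproduced directly with the same command line run_test passed in earlier (paths as used in this workspace, root assumed):

# Re-run the hello_bdev example on the QAT-backed crypto bdev. It opens crypto_ram,
# writes "Hello World!", reads it back and stops the app, matching the NOTICE lines above.
cd /var/jenkins/workspace/crypto-phy-autotest/spdk
sudo ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b crypto_ram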
00:32:04.050 00:32:04.050 [2024-07-25 12:13:49.834247] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:04.050 00:32:04.050 real 0m2.997s 00:32:04.050 user 0m2.624s 00:32:04.050 sys 0m0.325s 00:32:04.050 12:13:50 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.050 12:13:50 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:04.050 ************************************ 00:32:04.050 END TEST bdev_hello_world 00:32:04.050 ************************************ 00:32:04.308 12:13:50 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:32:04.308 12:13:50 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:04.308 12:13:50 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.308 12:13:50 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:04.308 ************************************ 00:32:04.308 START TEST bdev_bounds 00:32:04.308 ************************************ 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=138733 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 138733' 00:32:04.308 Process bdevio pid: 138733 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 138733 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 138733 ']' 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:04.308 12:13:50 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:04.308 [2024-07-25 12:13:50.327874] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
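The bdev_bounds stage launched above runs the generic bdevio exerciser once per crypto bdev (four suites in the run summary further down). A manual equivalent, assuming the default RPC socket and the same generated bdev.json, looks roughly like this:

# Start bdevio in wait mode (-w) with the harness's PRE_RESERVED_MEM=0 (-s 0),
# give it time to bring up its RPC socket (the harness uses waitforlisten rather
# than a fixed sleep), then let tests.py drive the per-bdev suites.
cd /var/jenkins/workspace/crypto-phy-autotest/spdk
sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
sleep 5
sudo ./test/bdev/bdevio/tests.py perform_tests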
00:32:04.308 [2024-07-25 12:13:50.328004] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138733 ] 00:32:04.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.565 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:04.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.565 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:04.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.6 cannot be used 
00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:04.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:04.566 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:04.566 [2024-07-25 12:13:50.531504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:04.566 [2024-07-25 12:13:50.615893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.566 [2024-07-25 12:13:50.615988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.566 [2024-07-25 12:13:50.615992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.566 [2024-07-25 12:13:50.637295] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:04.566 [2024-07-25 12:13:50.645326] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:04.566 [2024-07-25 12:13:50.653343] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:04.823 [2024-07-25 12:13:50.754174] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:07.345 [2024-07-25 12:13:52.918268] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:07.345 [2024-07-25 12:13:52.918348] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:07.345 [2024-07-25 12:13:52.918362] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:07.345 [2024-07-25 12:13:52.926283] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:07.345 [2024-07-25 12:13:52.926300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:07.345 [2024-07-25 12:13:52.926311] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:07.345 [2024-07-25 12:13:52.934307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:07.345 [2024-07-25 12:13:52.934322] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:07.345 [2024-07-25 12:13:52.934333] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:07.345 [2024-07-25 12:13:52.942331] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts2" 00:32:07.345 [2024-07-25 12:13:52.942347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:07.345 [2024-07-25 12:13:52.942358] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:07.345 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:07.345 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:32:07.345 12:13:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:07.345 I/O targets: 00:32:07.345 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:32:07.345 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:32:07.345 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:32:07.345 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:32:07.345 00:32:07.345 00:32:07.345 CUnit - A unit testing framework for C - Version 2.1-3 00:32:07.345 http://cunit.sourceforge.net/ 00:32:07.345 00:32:07.345 00:32:07.345 Suite: bdevio tests on: crypto_ram3 00:32:07.345 Test: blockdev write read block ...passed 00:32:07.345 Test: blockdev write zeroes read block ...passed 00:32:07.345 Test: blockdev write zeroes read no split ...passed 00:32:07.345 Test: blockdev write zeroes read split ...passed 00:32:07.345 Test: blockdev write zeroes read split partial ...passed 00:32:07.345 Test: blockdev reset ...passed 00:32:07.345 Test: blockdev write read 8 blocks ...passed 00:32:07.345 Test: blockdev write read size > 128k ...passed 00:32:07.345 Test: blockdev write read invalid size ...passed 00:32:07.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:07.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:07.345 Test: blockdev write read max offset ...passed 00:32:07.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:07.345 Test: blockdev writev readv 8 blocks ...passed 00:32:07.345 Test: blockdev writev readv 30 x 1block ...passed 00:32:07.345 Test: blockdev writev readv block ...passed 00:32:07.345 Test: blockdev writev readv size > 128k ...passed 00:32:07.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:07.345 Test: blockdev comparev and writev ...passed 00:32:07.345 Test: blockdev nvme passthru rw ...passed 00:32:07.345 Test: blockdev nvme passthru vendor specific ...passed 00:32:07.345 Test: blockdev nvme admin passthru ...passed 00:32:07.345 Test: blockdev copy ...passed 00:32:07.345 Suite: bdevio tests on: crypto_ram2 00:32:07.345 Test: blockdev write read block ...passed 00:32:07.345 Test: blockdev write zeroes read block ...passed 00:32:07.345 Test: blockdev write zeroes read no split ...passed 00:32:07.345 Test: blockdev write zeroes read split ...passed 00:32:07.345 Test: blockdev write zeroes read split partial ...passed 00:32:07.345 Test: blockdev reset ...passed 00:32:07.345 Test: blockdev write read 8 blocks ...passed 00:32:07.345 Test: blockdev write read size > 128k ...passed 00:32:07.345 Test: blockdev write read invalid size ...passed 00:32:07.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:07.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:07.345 Test: blockdev write read max offset ...passed 00:32:07.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:07.345 Test: blockdev 
writev readv 8 blocks ...passed 00:32:07.345 Test: blockdev writev readv 30 x 1block ...passed 00:32:07.345 Test: blockdev writev readv block ...passed 00:32:07.345 Test: blockdev writev readv size > 128k ...passed 00:32:07.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:07.345 Test: blockdev comparev and writev ...passed 00:32:07.345 Test: blockdev nvme passthru rw ...passed 00:32:07.345 Test: blockdev nvme passthru vendor specific ...passed 00:32:07.345 Test: blockdev nvme admin passthru ...passed 00:32:07.345 Test: blockdev copy ...passed 00:32:07.345 Suite: bdevio tests on: crypto_ram1 00:32:07.345 Test: blockdev write read block ...passed 00:32:07.345 Test: blockdev write zeroes read block ...passed 00:32:07.345 Test: blockdev write zeroes read no split ...passed 00:32:07.345 Test: blockdev write zeroes read split ...passed 00:32:07.345 Test: blockdev write zeroes read split partial ...passed 00:32:07.345 Test: blockdev reset ...passed 00:32:07.345 Test: blockdev write read 8 blocks ...passed 00:32:07.345 Test: blockdev write read size > 128k ...passed 00:32:07.345 Test: blockdev write read invalid size ...passed 00:32:07.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:07.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:07.345 Test: blockdev write read max offset ...passed 00:32:07.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:07.345 Test: blockdev writev readv 8 blocks ...passed 00:32:07.345 Test: blockdev writev readv 30 x 1block ...passed 00:32:07.345 Test: blockdev writev readv block ...passed 00:32:07.345 Test: blockdev writev readv size > 128k ...passed 00:32:07.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:07.345 Test: blockdev comparev and writev ...passed 00:32:07.345 Test: blockdev nvme passthru rw ...passed 00:32:07.345 Test: blockdev nvme passthru vendor specific ...passed 00:32:07.345 Test: blockdev nvme admin passthru ...passed 00:32:07.345 Test: blockdev copy ...passed 00:32:07.345 Suite: bdevio tests on: crypto_ram 00:32:07.345 Test: blockdev write read block ...passed 00:32:07.345 Test: blockdev write zeroes read block ...passed 00:32:07.345 Test: blockdev write zeroes read no split ...passed 00:32:07.345 Test: blockdev write zeroes read split ...passed 00:32:07.345 Test: blockdev write zeroes read split partial ...passed 00:32:07.345 Test: blockdev reset ...passed 00:32:07.345 Test: blockdev write read 8 blocks ...passed 00:32:07.345 Test: blockdev write read size > 128k ...passed 00:32:07.345 Test: blockdev write read invalid size ...passed 00:32:07.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:07.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:07.345 Test: blockdev write read max offset ...passed 00:32:07.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:07.345 Test: blockdev writev readv 8 blocks ...passed 00:32:07.345 Test: blockdev writev readv 30 x 1block ...passed 00:32:07.345 Test: blockdev writev readv block ...passed 00:32:07.345 Test: blockdev writev readv size > 128k ...passed 00:32:07.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:07.345 Test: blockdev comparev and writev ...passed 00:32:07.345 Test: blockdev nvme passthru rw ...passed 00:32:07.345 Test: blockdev nvme passthru vendor specific ...passed 00:32:07.345 Test: blockdev nvme admin passthru ...passed 00:32:07.345 Test: 
blockdev copy ...passed 00:32:07.345 00:32:07.345 Run Summary: Type Total Ran Passed Failed Inactive 00:32:07.345 suites 4 4 n/a 0 0 00:32:07.345 tests 92 92 92 0 0 00:32:07.345 asserts 520 520 520 0 n/a 00:32:07.345 00:32:07.346 Elapsed time = 0.495 seconds 00:32:07.346 0 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 138733 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 138733 ']' 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 138733 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.346 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138733 00:32:07.603 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.603 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.603 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138733' 00:32:07.603 killing process with pid 138733 00:32:07.603 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 138733 00:32:07.603 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 138733 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:32:07.862 00:32:07.862 real 0m3.562s 00:32:07.862 user 0m9.701s 00:32:07.862 sys 0m0.587s 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:07.862 ************************************ 00:32:07.862 END TEST bdev_bounds 00:32:07.862 ************************************ 00:32:07.862 12:13:53 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:32:07.862 12:13:53 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:07.862 12:13:53 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.862 12:13:53 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:07.862 ************************************ 00:32:07.862 START TEST bdev_nbd 00:32:07.862 ************************************ 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 
'crypto_ram3') 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=139293 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 139293 /var/tmp/spdk-nbd.sock 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 139293 ']' 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:07.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.862 12:13:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:07.862 [2024-07-25 12:13:53.935761] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
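The bdev_nbd stage starting here exports each crypto bdev through the kernel NBD driver and performs direct dd I/O on the resulting block nodes. Reduced to its essentials, the per-device flow visible in the rest of this stage is roughly the following; /dev/nbd0 is an assumed free node (the RPC can also pick one itself), and root privileges plus the nbd kernel module are required.

# Map a crypto bdev to an NBD block device via the dedicated spdk-nbd.sock used
# by this test, do a 4 KiB O_DIRECT read as the dd trace below does, then unmap it.
modprobe nbd
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0
dd if=/dev/nbd0 of=./test/bdev/nbdtest bs=4096 count=1 iflag=direct
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0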
00:32:07.862 [2024-07-25 12:13:53.935819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.120 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:08.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:08.121 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:08.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:08.121 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:08.121 [2024-07-25 12:13:54.067555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.121 [2024-07-25 12:13:54.150226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.121 [2024-07-25 12:13:54.171475] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:08.121 [2024-07-25 12:13:54.179497] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:08.121 [2024-07-25 12:13:54.187515] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:08.378 [2024-07-25 12:13:54.291966] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:10.903 [2024-07-25 12:13:56.457987] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:10.903 [2024-07-25 12:13:56.458055] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:10.903 [2024-07-25 12:13:56.458069] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:10.903 [2024-07-25 12:13:56.466006] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:10.903 [2024-07-25 12:13:56.466025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:10.903 [2024-07-25 12:13:56.466037] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:10.903 [2024-07-25 12:13:56.474025] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:10.903 [2024-07-25 12:13:56.474042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:10.903 [2024-07-25 12:13:56.474056] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:10.903 [2024-07-25 12:13:56.482044] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:10.903 [2024-07-25 12:13:56.482061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:10.903 [2024-07-25 12:13:56.482071] vbdev_crypto.c: 
617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:10.903 1+0 records in 00:32:10.903 1+0 records out 00:32:10.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287154 s, 14.3 MB/s 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:10.903 12:13:56 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:11.161 1+0 records in 00:32:11.161 1+0 records out 00:32:11.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336102 s, 12.2 MB/s 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:11.161 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:32:11.417 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:32:11.417 12:13:57 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:32:11.417 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:32:11.417 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:32:11.417 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:11.418 1+0 records in 00:32:11.418 1+0 records out 00:32:11.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029735 s, 13.8 MB/s 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:11.418 1+0 records in 00:32:11.418 1+0 records out 00:32:11.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325218 s, 12.6 MB/s 00:32:11.418 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.674 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:11.674 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:11.674 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd0", 00:32:11.675 "bdev_name": "crypto_ram" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd1", 00:32:11.675 "bdev_name": "crypto_ram1" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd2", 00:32:11.675 "bdev_name": "crypto_ram2" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd3", 00:32:11.675 "bdev_name": "crypto_ram3" 00:32:11.675 } 00:32:11.675 ]' 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd0", 00:32:11.675 "bdev_name": "crypto_ram" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd1", 00:32:11.675 "bdev_name": "crypto_ram1" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd2", 00:32:11.675 "bdev_name": "crypto_ram2" 00:32:11.675 }, 00:32:11.675 { 00:32:11.675 "nbd_device": "/dev/nbd3", 00:32:11.675 "bdev_name": "crypto_ram3" 00:32:11.675 } 00:32:11.675 ]' 00:32:11.675 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:11.933 12:13:57 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:12.190 12:13:58 
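Each nbd_start_disk RPC traced above is immediately followed by a waitfornbd poll before the exported device is touched: the helper watches /proc/partitions for the new nbdX entry and then test-reads a single 4 KiB block with O_DIRECT, keeping the result only long enough to confirm it is non-empty. A simplified reconstruction of that pattern follows; it is inferred from the trace, not copied from common/autotest_common.sh, and the scratch-file path, sleep interval and error handling are assumptions.

    # Simplified reconstruction of the waitfornbd pattern seen in the trace.
    # Scratch path, sleeps and error handling are assumptions.
    waitfornbd() {
      local nbd_name=$1 tmp=/tmp/nbdtest i size

      # Wait for the kernel to announce the device in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
      done

      # Confirm the device is actually readable: pull one 4 KiB block with
      # O_DIRECT and check that something arrived.
      for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
          size=$(stat -c %s "$tmp")
          rm -f "$tmp"
          [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
      done
      return 1
    }

The matching waitfornbd_exit helper, used by the stop sequence that continues below, polls the same /proc/partitions file in the opposite direction and returns once the device name has disappeared.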
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:12.190 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:12.191 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:12.447 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:32:12.704 12:13:58 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:12.704 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:12.962 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:12.962 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:12.962 12:13:58 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
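The section above is the nbd_rpc_start_stop_verify pass: each crypto bdev is exported over NBD through the dedicated /var/tmp/spdk-nbd.sock RPC server, the nbd_get_disks output is checked, every device is stopped again, and the disk list is expected to come back empty. The trace that follows repeats the exercise for the data-verification pass, this time pinning the exports to explicit nodes (/dev/nbd0, /dev/nbd1, /dev/nbd10, /dev/nbd11). A condensed sketch of the RPC sequence is given below; rpc.py stands for scripts/rpc.py in the SPDK tree, the target is assumed to already be listening on the socket, and waitfornbd/waitfornbd_exit are the polling helpers sketched earlier.

    # Condensed sketch of the start/verify/stop sequence traced above.
    # Assumes an SPDK app is serving RPCs on /var/tmp/spdk-nbd.sock and that
    # rpc.py resolves to scripts/rpc.py from the SPDK tree.
    sock=/var/tmp/spdk-nbd.sock

    for bdev in crypto_ram crypto_ram1 crypto_ram2 crypto_ram3; do
      nbd=$(rpc.py -s "$sock" nbd_start_disk "$bdev")   # prints e.g. /dev/nbd0
      waitfornbd "$(basename "$nbd")"                   # poll until readable
    done

    rpc.py -s "$sock" nbd_get_disks                     # JSON: nbd_device <-> bdev_name pairs

    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3; do
      rpc.py -s "$sock" nbd_stop_disk "$nbd"
      waitfornbd_exit "$(basename "$nbd")"              # poll until the node disappears
    done

    # With everything stopped, the disk list should be empty again.
    count=$(rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ] && echo "all NBD exports released"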
00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:12.962 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:32:13.220 /dev/nbd0 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.220 1+0 records in 00:32:13.220 1+0 records out 00:32:13.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308328 s, 13.3 MB/s 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:13.220 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:32:13.477 /dev/nbd1 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:13.477 12:13:59 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:13.477 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.477 1+0 records in 00:32:13.477 1+0 records out 00:32:13.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302636 s, 13.5 MB/s 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:13.478 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:32:13.734 /dev/nbd10 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.735 1+0 records in 00:32:13.735 1+0 records out 00:32:13.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319115 s, 12.8 MB/s 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:13.735 12:13:59 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:13.735 12:13:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:32:13.992 /dev/nbd11 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.992 1+0 records in 00:32:13.992 1+0 records out 00:32:13.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371042 s, 11.0 MB/s 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:13.992 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd0", 00:32:14.249 "bdev_name": 
"crypto_ram" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd1", 00:32:14.249 "bdev_name": "crypto_ram1" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd10", 00:32:14.249 "bdev_name": "crypto_ram2" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd11", 00:32:14.249 "bdev_name": "crypto_ram3" 00:32:14.249 } 00:32:14.249 ]' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd0", 00:32:14.249 "bdev_name": "crypto_ram" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd1", 00:32:14.249 "bdev_name": "crypto_ram1" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd10", 00:32:14.249 "bdev_name": "crypto_ram2" 00:32:14.249 }, 00:32:14.249 { 00:32:14.249 "nbd_device": "/dev/nbd11", 00:32:14.249 "bdev_name": "crypto_ram3" 00:32:14.249 } 00:32:14.249 ]' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:14.249 /dev/nbd1 00:32:14.249 /dev/nbd10 00:32:14.249 /dev/nbd11' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:14.249 /dev/nbd1 00:32:14.249 /dev/nbd10 00:32:14.249 /dev/nbd11' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:14.249 256+0 records in 00:32:14.249 256+0 records out 00:32:14.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109376 s, 95.9 MB/s 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:14.249 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:14.507 256+0 records in 00:32:14.507 256+0 records out 00:32:14.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760361 s, 13.8 MB/s 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:14.507 256+0 records in 00:32:14.507 256+0 records out 00:32:14.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404516 s, 25.9 MB/s 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:32:14.507 256+0 records in 00:32:14.507 256+0 records out 00:32:14.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343738 s, 30.5 MB/s 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:32:14.507 256+0 records in 00:32:14.507 256+0 records out 00:32:14.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0501167 s, 20.9 MB/s 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:14.507 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:14.819 12:14:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:15.098 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:15.355 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:15.356 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:15.612 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:15.870 12:14:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b 
malloc_lvol_verify 16 512 00:32:16.128 malloc_lvol_verify 00:32:16.128 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:16.128 cceebe33-9836-45ca-bf43-fd0ebd210065 00:32:16.128 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:16.385 a4eee215-b61b-4664-a973-87ccb7f8be46 00:32:16.385 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:16.643 /dev/nbd0 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:16.643 mke2fs 1.46.5 (30-Dec-2021) 00:32:16.643 Discarding device blocks: 0/4096 done 00:32:16.643 Creating filesystem with 4096 1k blocks and 1024 inodes 00:32:16.643 00:32:16.643 Allocating group tables: 0/1 done 00:32:16.643 Writing inode tables: 0/1 done 00:32:16.643 Creating journal (1024 blocks): done 00:32:16.643 Writing superblocks and filesystem accounting information: 0/1 done 00:32:16.643 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:16.643 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 139293 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 139293 ']' 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 139293 00:32:16.900 12:14:02 
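The data-verification pass traced above follows a dd/cmp pattern: a 1 MiB random reference file is generated, written onto each exported NBD device with O_DIRECT, and each device is then compared byte-for-byte against the reference with cmp before the exports are torn down. The nbd_with_lvol_verify pass just above reuses the same start/stop machinery on a logical volume (malloc bdev, lvstore, lvol) and additionally formats it with mkfs.ext4, as the mke2fs output shows, before killprocess stops the SPDK application. A minimal sketch of the write-and-compare step is below, with the scratch path shortened (the test keeps it under test/bdev/nbdrandtest in the SPDK tree).

    # Minimal sketch of the dd/cmp data check traced above; the scratch path
    # is a placeholder for $SPDK_DIR/test/bdev/nbdrandtest.
    pattern=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)

    # 1 MiB of random data as the reference pattern.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256

    # Write the pattern to every NBD device, bypassing the page cache.
    for nbd in "${nbd_list[@]}"; do
      dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Read back and compare the first 1 MiB byte-for-byte.
    for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$pattern" "$nbd"
    done

    rm "$pattern"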
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.900 12:14:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139293 00:32:16.900 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:16.900 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:16.900 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139293' 00:32:16.900 killing process with pid 139293 00:32:16.900 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 139293 00:32:16.900 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 139293 00:32:17.465 12:14:03 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:17.465 00:32:17.465 real 0m9.650s 00:32:17.465 user 0m12.458s 00:32:17.465 sys 0m3.655s 00:32:17.465 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.465 12:14:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:17.465 ************************************ 00:32:17.465 END TEST bdev_nbd 00:32:17.465 ************************************ 00:32:17.465 12:14:03 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:32:17.465 12:14:03 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:32:17.465 12:14:03 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:32:17.465 12:14:03 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:17.465 12:14:03 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:17.465 12:14:03 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.465 12:14:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:17.732 ************************************ 00:32:17.732 START TEST bdev_fio 00:32:17.732 ************************************ 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:17.732 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=verify 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:17.732 ************************************ 00:32:17.732 START TEST bdev_fio_rw_verify 00:32:17.732 ************************************ 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:17.732 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 
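bdev_fio_rw_verify exercises the same four crypto bdevs through fio's SPDK bdev plugin rather than through NBD: fio_config_gen writes a verify-mode bdev.fio, one [job_<bdev>] section with a matching filename= line is appended per bdev, and fio is launched with the spdk_bdev ioengine, the generated JSON bdev config, and the plugin preloaded via LD_PRELOAD (the ldd/grep probes above and below only decide whether an ASAN runtime must be preloaded as well; here none is found). A trimmed sketch of the invocation follows, with $SPDK_DIR standing in for the long workspace path.

    # Trimmed sketch of the fio run that follows; $SPDK_DIR stands in for
    # /var/jenkins/workspace/crypto-phy-autotest/spdk.
    SPDK_DIR=/path/to/spdk

    # One job section per crypto bdev in the generated fio config.
    for b in crypto_ram crypto_ram1 crypto_ram2 crypto_ram3; do
      {
        echo "[job_$b]"
        echo "filename=$b"
      } >> "$SPDK_DIR/test/bdev/bdev.fio"
    done

    # Run fio through the SPDK bdev plugin against the JSON bdev config.
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      "$SPDK_DIR/test/bdev/bdev.fio" --verify_state_save=0 \
      --spdk_json_conf="$SPDK_DIR/test/bdev/bdev.json" \
      --spdk_mem=0 --aux-path="$SPDK_DIR/../output"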
00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.733 12:14:03 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:18.306 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.306 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.306 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.306 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.306 fio-3.35 00:32:18.306 Starting 4 threads 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.4 cannot be used 
00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:18.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.306 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:33.176 00:32:33.176 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=141732: Thu Jul 25 12:14:16 2024 00:32:33.176 read: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(926MiB/10001msec) 00:32:33.176 slat (usec): min=15, max=255, avg=57.58, stdev=35.85 00:32:33.176 clat (usec): min=26, max=1433, avg=324.39, stdev=210.60 00:32:33.176 lat (usec): min=72, max=1534, avg=381.96, stdev=229.52 00:32:33.176 clat percentiles (usec): 00:32:33.176 | 50.000th=[ 260], 99.000th=[ 1004], 99.900th=[ 1172], 99.990th=[ 1270], 00:32:33.176 | 99.999th=[ 1401] 00:32:33.176 write: IOPS=26.2k, BW=102MiB/s (107MB/s)(994MiB/9725msec); 0 zone resets 00:32:33.176 slat (usec): min=21, max=437, avg=68.74, stdev=35.13 00:32:33.176 clat (usec): min=20, max=2393, avg=360.03, stdev=217.51 00:32:33.176 lat (usec): min=54, max=2574, avg=428.77, stdev=235.28 00:32:33.176 clat percentiles (usec): 
00:32:33.176 | 50.000th=[ 310], 99.000th=[ 1057], 99.900th=[ 1221], 99.990th=[ 1467], 00:32:33.176 | 99.999th=[ 2278] 00:32:33.176 bw ( KiB/s): min=87728, max=137128, per=97.25%, avg=101786.11, stdev=2779.35, samples=76 00:32:33.176 iops : min=21932, max=34282, avg=25446.53, stdev=694.84, samples=76 00:32:33.176 lat (usec) : 50=0.01%, 100=4.57%, 250=37.44%, 500=38.56%, 750=13.21% 00:32:33.176 lat (usec) : 1000=4.83% 00:32:33.176 lat (msec) : 2=1.39%, 4=0.01% 00:32:33.176 cpu : usr=99.63%, sys=0.00%, ctx=47, majf=0, minf=274 00:32:33.176 IO depths : 1=3.2%, 2=27.7%, 4=55.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:33.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.176 complete : 0=0.0%, 4=87.8%, 8=12.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.176 issued rwts: total=237001,254478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:33.176 00:32:33.176 Run status group 0 (all jobs): 00:32:33.176 READ: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=926MiB (971MB), run=10001-10001msec 00:32:33.176 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=994MiB (1042MB), run=9725-9725msec 00:32:33.176 00:32:33.176 real 0m13.464s 00:32:33.176 user 0m53.782s 00:32:33.176 sys 0m0.507s 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:33.176 ************************************ 00:32:33.176 END TEST bdev_fio_rw_verify 00:32:33.176 ************************************ 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "02104307-47aa-57c2-b810-ffae89b1319e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02104307-47aa-57c2-b810-ffae89b1319e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "482c2f5e-eea6-5879-9a1e-f905a88dc45c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "482c2f5e-eea6-5879-9a1e-f905a88dc45c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:32:33.176 crypto_ram1 00:32:33.176 crypto_ram2 00:32:33.176 crypto_ram3 ]] 00:32:33.176 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "02104307-47aa-57c2-b810-ffae89b1319e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02104307-47aa-57c2-b810-ffae89b1319e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "482c2f5e-eea6-5879-9a1e-f905a88dc45c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "482c2f5e-eea6-5879-9a1e-f905a88dc45c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac66796e-85ea-5fcd-9a0c-3f9cb93442f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "17f9dcc5-a33c-5f1c-a86f-55fe1f5d4865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:33.177 ************************************ 00:32:33.177 START TEST bdev_fio_trim 00:32:33.177 ************************************ 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.177 12:14:17 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:33.177 12:14:17 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:33.177 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:33.177 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:33.177 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:33.177 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:33.177 fio-3.35 00:32:33.177 Starting 4 threads 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:32:33.177 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:33.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.177 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:33.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:33.178 EAL: 
Requested device 0000:3f:02.7 cannot be used 00:32:45.417 00:32:45.417 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=143995: Thu Jul 25 12:14:30 2024 00:32:45.417 write: IOPS=42.3k, BW=165MiB/s (173MB/s)(1652MiB/10001msec); 0 zone resets 00:32:45.417 slat (usec): min=16, max=440, avg=56.40, stdev=36.52 00:32:45.417 clat (usec): min=43, max=1280, avg=198.87, stdev=123.24 00:32:45.417 lat (usec): min=66, max=1468, avg=255.27, stdev=144.72 00:32:45.417 clat percentiles (usec): 00:32:45.417 | 50.000th=[ 174], 99.000th=[ 644], 99.900th=[ 750], 99.990th=[ 807], 00:32:45.417 | 99.999th=[ 1172] 00:32:45.417 bw ( KiB/s): min=159843, max=195744, per=100.00%, avg=169382.79, stdev=2768.54, samples=76 00:32:45.417 iops : min=39960, max=48936, avg=42345.47, stdev=692.17, samples=76 00:32:45.417 trim: IOPS=42.3k, BW=165MiB/s (173MB/s)(1652MiB/10001msec); 0 zone resets 00:32:45.417 slat (usec): min=5, max=149, avg=14.33, stdev= 6.07 00:32:45.417 clat (usec): min=39, max=1469, avg=255.44, stdev=144.74 00:32:45.417 lat (usec): min=59, max=1500, avg=269.77, stdev=147.15 00:32:45.417 clat percentiles (usec): 00:32:45.417 | 50.000th=[ 219], 99.000th=[ 758], 99.900th=[ 898], 99.990th=[ 979], 00:32:45.417 | 99.999th=[ 1401] 00:32:45.417 bw ( KiB/s): min=159843, max=195744, per=100.00%, avg=169382.79, stdev=2768.54, samples=76 00:32:45.417 iops : min=39960, max=48936, avg=42345.47, stdev=692.17, samples=76 00:32:45.417 lat (usec) : 50=1.29%, 100=12.00%, 250=55.62%, 500=25.56%, 750=4.97% 00:32:45.417 lat (usec) : 1000=0.56% 00:32:45.417 lat (msec) : 2=0.01% 00:32:45.417 cpu : usr=99.58%, sys=0.01%, ctx=45, majf=0, minf=106 00:32:45.417 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.417 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.417 issued rwts: total=0,422793,422795,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:45.417 00:32:45.417 Run status group 0 (all jobs): 00:32:45.417 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1652MiB (1732MB), run=10001-10001msec 00:32:45.417 TRIM: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1652MiB (1732MB), run=10001-10001msec 00:32:45.417 00:32:45.417 real 0m13.484s 00:32:45.417 user 0m53.638s 00:32:45.417 sys 0m0.534s 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:45.417 ************************************ 00:32:45.417 END TEST bdev_fio_trim 00:32:45.417 ************************************ 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:32:45.417 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:45.417 00:32:45.417 real 0m27.335s 00:32:45.417 user 1m47.619s 00:32:45.417 sys 0m1.250s 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.417 12:14:30 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:32:45.417 ************************************ 00:32:45.417 END TEST bdev_fio 00:32:45.417 ************************************ 00:32:45.417 12:14:30 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:45.417 12:14:30 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:45.417 12:14:30 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:45.417 12:14:30 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:45.417 12:14:30 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:45.417 ************************************ 00:32:45.417 START TEST bdev_verify 00:32:45.417 ************************************ 00:32:45.417 12:14:31 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:45.417 [2024-07-25 12:14:31.078264] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:32:45.417 [2024-07-25 12:14:31.078320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145859 ] 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:45.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.417 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:45.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:45.418 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:45.418 [2024-07-25 12:14:31.210261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:45.418 [2024-07-25 12:14:31.294984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.418 [2024-07-25 12:14:31.294990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.418 [2024-07-25 12:14:31.316323] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:45.418 [2024-07-25 12:14:31.324354] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:45.418 [2024-07-25 12:14:31.332372] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:45.418 [2024-07-25 12:14:31.437739] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:47.946 [2024-07-25 
12:14:33.604943] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:47.946 [2024-07-25 12:14:33.605017] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:47.946 [2024-07-25 12:14:33.605031] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.947 [2024-07-25 12:14:33.612959] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:47.947 [2024-07-25 12:14:33.612977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:47.947 [2024-07-25 12:14:33.612987] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.947 [2024-07-25 12:14:33.620980] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:47.947 [2024-07-25 12:14:33.621000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:47.947 [2024-07-25 12:14:33.621011] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.947 [2024-07-25 12:14:33.629003] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:47.947 [2024-07-25 12:14:33.629020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:47.947 [2024-07-25 12:14:33.629030] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.947 Running I/O for 5 seconds... 00:32:53.259 00:32:53.259 Latency(us) 00:32:53.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.259 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x0 length 0x1000 00:32:53.259 crypto_ram : 5.08 529.36 2.07 0.00 0.00 241245.03 4980.74 176999.63 00:32:53.259 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x1000 length 0x1000 00:32:53.259 crypto_ram : 5.08 529.53 2.07 0.00 0.00 241130.68 6684.67 176160.77 00:32:53.259 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x0 length 0x1000 00:32:53.259 crypto_ram1 : 5.08 529.14 2.07 0.00 0.00 240308.20 5478.81 160222.41 00:32:53.259 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x1000 length 0x1000 00:32:53.259 crypto_ram1 : 5.08 529.42 2.07 0.00 0.00 240231.73 7235.17 159383.55 00:32:53.259 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x0 length 0x1000 00:32:53.259 crypto_ram2 : 5.06 4109.20 16.05 0.00 0.00 30798.76 3407.87 28521.27 00:32:53.259 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x1000 length 0x1000 00:32:53.259 crypto_ram2 : 5.06 4125.64 16.12 0.00 0.00 30692.12 8336.18 28521.27 00:32:53.259 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x0 length 0x1000 00:32:53.259 crypto_ram3 : 5.07 4116.49 16.08 0.00 0.00 30681.77 3460.30 28940.70 00:32:53.259 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:53.259 Verification LBA range: start 0x1000 length 0x1000 00:32:53.259 
crypto_ram3 : 5.07 4139.79 16.17 0.00 0.00 30507.52 3132.62 28730.98 00:32:53.259 =================================================================================================================== 00:32:53.259 Total : 18608.57 72.69 0.00 0.00 54627.57 3132.62 176999.63 00:32:53.259 00:32:53.259 real 0m8.118s 00:32:53.259 user 0m15.440s 00:32:53.259 sys 0m0.336s 00:32:53.259 12:14:39 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.259 12:14:39 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:53.259 ************************************ 00:32:53.259 END TEST bdev_verify 00:32:53.259 ************************************ 00:32:53.259 12:14:39 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:53.259 12:14:39 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:53.259 12:14:39 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:53.259 12:14:39 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:53.259 ************************************ 00:32:53.259 START TEST bdev_verify_big_io 00:32:53.259 ************************************ 00:32:53.259 12:14:39 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:53.259 [2024-07-25 12:14:39.270446] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
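The two bdevperf passes in this part of the log differ only in I/O size: bdev_verify drives a 4096-byte verify workload and bdev_verify_big_io repeats it with 65536-byte I/O. The commands below are copied from the run_test lines; the comments give one reading of the common flags (-q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds, -m core mask, --json bdev configuration), while -C is simply carried over from the test script unchanged. The shell variables are illustrative shorthand, not part of the scripts:

  BDEVPERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
  CONF=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
  # bdev_verify: 128-deep 4 KiB verify workload for 5 s on cores 0-1 (mask 0x3)
  $BDEVPERF --json $CONF -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # bdev_verify_big_io: same workload with 64 KiB I/O
  $BDEVPERF --json $CONF -q 128 -o 65536 -w verify -t 5 -C -m 0x3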
00:32:53.259 [2024-07-25 12:14:39.270500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147196 ] 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:53.259 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:53.259 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:01.7 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.0 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.1 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.2 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.3 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.4 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.5 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.6 cannot be used
00:32:53.259 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:53.259 EAL: Requested device 0000:3f:02.7 cannot be used
00:32:53.516 [2024-07-25 12:14:39.401580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:53.516 [2024-07-25 12:14:39.484562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:53.516 [2024-07-25 12:14:39.484566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:53.516 [2024-07-25 12:14:39.505991] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:32:53.516 [2024-07-25 12:14:39.514010] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:53.516 [2024-07-25 12:14:39.522035] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:53.516 [2024-07-25 12:14:39.629228] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:32:56.036 [2024-07-25 12:14:41.784352] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:32:56.036 [2024-07-25 12:14:41.784436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:56.036 [2024-07-25 12:14:41.784450] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:56.036 [2024-07-25 12:14:41.792382] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:32:56.036 [2024-07-25 12:14:41.792400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:56.036 [2024-07-25 12:14:41.792411] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:56.036 [2024-07-25 12:14:41.800391] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:32:56.036 [2024-07-25 12:14:41.800408] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:56.036 [2024-07-25 12:14:41.800418] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:56.037 [2024-07-25 12:14:41.808415] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:32:56.037 [2024-07-25 12:14:41.808431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:56.037 [2024-07-25 12:14:41.808441] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:56.037 Running I/O for 5 seconds...
00:32:56.602 [2024-07-25 12:14:42.610711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" entry repeats continuously from 12:14:42.610711 through 12:14:42.904061 (Jenkins timestamps 00:32:56.602-00:32:56.867); the intervening duplicate entries are omitted ...]
00:32:56.867 [2024-07-25 12:14:42.904061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:56.867 [2024-07-25 12:14:42.906079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.906446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.906805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.907983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.909769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.911232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.912083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.913600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.913851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.913867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.916079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.916444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.917268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.918493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.920258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.921326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.922813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.924179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.924431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.924446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.867 [2024-07-25 12:14:42.926889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.927563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.928789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.930238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:56.868 [2024-07-25 12:14:42.931906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.933058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.934283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.935747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.935998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.936013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.938658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.939928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.941386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.942839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.943988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.945209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.946655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.948116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.948472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.948488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.952586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.954150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.955604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.956899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.958405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.959875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.961337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.962164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:56.868 [2024-07-25 12:14:42.962600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.962615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.965942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.967405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.968964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.969889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.971616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.973090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.974207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.974565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.974984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.975001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.978328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.979790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.980484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:56.868 [2024-07-25 12:14:42.981735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.983498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.984650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.985009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.985372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.985790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.985807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.988923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.989687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.129 [2024-07-25 12:14:42.990938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.992392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.994017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.994381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.994741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.995100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.995523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.995540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.997757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:42.999187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.000699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.002171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.002780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.003144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.003501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.003858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.004152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.004168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.007005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.008305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.009767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.011338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.012050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.129 [2024-07-25 12:14:43.012416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.129 [2024-07-25 12:14:43.012775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.013479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.013762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.013778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.016518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.017958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.019421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.020194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.020953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.021316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.021673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.023090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.023346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.023361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.026298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.026787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.027161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.027519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.028293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.028672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.029033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.029398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.029797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.029813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.130 [2024-07-25 12:14:43.032303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.032662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.033024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.033389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.034079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.034439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.034797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.035161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.035561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.035578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.038103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.038472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.038837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.039200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.039922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.040289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.040668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.041026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.041425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.041442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.043949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.044317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.044674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.044711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.130 [2024-07-25 12:14:43.045493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.045857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.046222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.046578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.047045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.047061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.049533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.049895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.050260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.050636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.050686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.051060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.051429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.051785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.052144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.052508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.052836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.052853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.130 [2024-07-25 12:14:43.055766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.055879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.056314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.056330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.058639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.058693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.058731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.058769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.059102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.130 [2024-07-25 12:14:43.059163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.059203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.059241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.059278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.059700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.059720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.061867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.061921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.061959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.131 [2024-07-25 12:14:43.062533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.062949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.065822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.066228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.066244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.068467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.068509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.068547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.068584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.068999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.069048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.069087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.069127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.069175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.069567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.131 [2024-07-25 12:14:43.069583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.071725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.071768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.071806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.071844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.072882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.075772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.076187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.076203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.078372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.131 [2024-07-25 12:14:43.078432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.078474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.078512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.078938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.078988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.079027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.079065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.079103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.079470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.079486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.081664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.081705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.081743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.081781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.131 [2024-07-25 12:14:43.082715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.084863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.084905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.084943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.132 [2024-07-25 12:14:43.084983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.085900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.088780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.089254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.089270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.091610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.091664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.091708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.091747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.132 [2024-07-25 12:14:43.092133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.092712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.094956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.095728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.096115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.096131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.132 [2024-07-25 12:14:43.098798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.132 [2024-07-25 12:14:43.098837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:57.132 ... last message repeated for every intermediate log entry (2024-07-25 12:14:43.098874 through 12:14:43.418488) ...
00:32:57.402 [2024-07-25 12:14:43.418909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:57.402 [2024-07-25 12:14:43.419279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.419636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.419994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.420367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.420740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.420755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.423268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.423630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.423986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.424352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.424755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.425118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.425486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.425845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.426207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.426664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.426680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.429152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.429513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.402 [2024-07-25 12:14:43.429874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.430246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.430606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.430971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.431334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.403 [2024-07-25 12:14:43.431692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.432050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.432431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.432447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.434986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.435359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.435407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.435764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.436181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.436544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.436900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.437272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.437636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.438100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.438116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.440609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.440976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.441339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.441382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.441776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.442149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.442514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.442871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.443236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.403 [2024-07-25 12:14:43.443641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.443656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.445832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.445876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.445913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.445950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.446986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.447002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.449911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.450346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.450363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.403 [2024-07-25 12:14:43.452686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.452728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.452766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.452803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.453779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.455983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.456731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.457095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.457112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.403 [2024-07-25 12:14:43.459426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.459974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.460302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.460318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.462638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.462702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.462740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.462797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.463787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.465927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.465969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.466008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.403 [2024-07-25 12:14:43.466045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.404 [2024-07-25 12:14:43.466383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.466440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.466480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.466518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.466561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.466999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.467015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.469930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.470376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.470392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.472514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.472556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.472593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.472637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.404 [2024-07-25 12:14:43.473211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.473677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.475907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.475948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.475985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.476599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.477038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.477054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.479819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.404 [2024-07-25 12:14:43.479856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.480252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.480269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.481916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.481959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.482642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.483036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.483052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.485986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.486340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.404 [2024-07-25 12:14:43.486360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.488985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.404 [2024-07-25 12:14:43.490937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.490977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.491015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.491054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.491460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.491479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.405 [2024-07-25 12:14:43.493498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.493962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.494003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.494252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.494268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.495734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.495774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.495811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.495847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.496836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.498818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.498865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.498902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.405 [2024-07-25 12:14:43.498938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.499606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.501690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.502110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.502126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.405 [2024-07-25 12:14:43.504726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.504838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.505081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.505097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.506632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.506694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.506731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.506768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.507506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.509830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.509892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.509933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.509975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.405 [2024-07-25 12:14:43.510362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.510658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.405 [2024-07-25 12:14:43.512990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.664 [2024-07-25 12:14:43.515954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.664 [2024-07-25 12:14:43.516205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the identical *ERROR* line from accel_dpdk_cryptodev.c:468 ("Failed to get src_mbufs!") repeats for every log entry between 12:14:43.516205 and 12:14:43.839558 (console offsets 00:32:57.664 through 00:32:57.931) ...]
00:32:57.931 [2024-07-25 12:14:43.839558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:57.931 [2024-07-25 12:14:43.839609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.839662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.839992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.840009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.842791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.843034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.843049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.844582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.844630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.844676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.844717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.844961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.845013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.845051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.845088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.845125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.931 [2024-07-25 12:14:43.845415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.845433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.847775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.847827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.931 [2024-07-25 12:14:43.847866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.847903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.848591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.850893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.932 [2024-07-25 12:14:43.853187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.853772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.854048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.854063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.855970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.856034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.856071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.856108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.856360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.856377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.858434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.858478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.932 [2024-07-25 12:14:43.858517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.858555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.858958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.859407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.860913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.860953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.860991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.861754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.863805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.863850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.863887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.863924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.932 [2024-07-25 12:14:43.864339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.864804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.932 [2024-07-25 12:14:43.866674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.866719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.866763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.866803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.867053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.867068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.933 [2024-07-25 12:14:43.869606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.869682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.870107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.870123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.871548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.871590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.871634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.871674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.872515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.874893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.933 [2024-07-25 12:14:43.874930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.875333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.875350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.876859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.876901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.876941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.876978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.877717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.879990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.880028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.880451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.933 [2024-07-25 12:14:43.880467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.882984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.883000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.884534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.884577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.884620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.884670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.885712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.933 [2024-07-25 12:14:43.887631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.934 [2024-07-25 12:14:43.887681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.887722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.887759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.888477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.889952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.889993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.890631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.891051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.891067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.934 [2024-07-25 12:14:43.893125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.893802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.895994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.896049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.896515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.896532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.898520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.898569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.898626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.898666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.898914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.934 [2024-07-25 12:14:43.898967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.899008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.899045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.899082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.899333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.899349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.900967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.901861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.934 [2024-07-25 12:14:43.904606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.904900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.934 [2024-07-25 12:14:43.906504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.906550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.907994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.908839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.910881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.910930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.910968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.912907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.935 [2024-07-25 12:14:43.913155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.913180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.915617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.915999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.916364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.916731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.917127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.918665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.920097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.921664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.923171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.923560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.923574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.925415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.925779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.926147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.926506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.926755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.927978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.929448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.930911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.931593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.931841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:57.935 [2024-07-25 12:14:43.931858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:57.935 [2024-07-25 12:14:43.933783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
... (the same *ERROR* line from accel_dpdk_cryptodev.c:468 repeats for every allocation-failure iteration between 12:14:43.933783 and 12:14:44.244029) ...
00:32:58.201 [2024-07-25 12:14:44.244029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:58.202 [2024-07-25 12:14:44.244275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.244291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.248698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.248746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.248784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.248822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.249781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.253731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.253780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.253824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.253871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.254552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.202 [2024-07-25 12:14:44.258351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.258903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.259154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.259170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.263764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.263811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.263852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.263903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.264912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.202 [2024-07-25 12:14:44.268506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.202 [2024-07-25 12:14:44.268962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.269214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.269230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.272954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.273841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.203 [2024-07-25 12:14:44.278688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.278860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.279230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.279246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.282913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.282959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.283753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.287724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.287787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.287827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.287865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.203 [2024-07-25 12:14:44.288398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.288812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.292600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.292646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.292683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.292727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.292974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.293411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.203 [2024-07-25 12:14:44.296568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.296826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.300689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.300736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.300779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.300818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.301874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.203 [2024-07-25 12:14:44.305855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.305891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.306134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.204 [2024-07-25 12:14:44.306154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.310865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.311127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.204 [2024-07-25 12:14:44.311154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.315777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.315832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.316814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.317257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.317274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.318887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.463 [2024-07-25 12:14:44.318929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.318966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.320689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.322821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.323188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.324107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.325354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.325613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.327099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.328099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.329669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.331112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.331365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.331380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.333663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.334472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.335708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.463 [2024-07-25 12:14:44.337194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.337441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.338628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.340015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.341307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.342778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.343024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.343040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.346117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.347338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.348794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.350268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.350568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.351834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.353056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.354514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.355978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.356328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.356344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.359771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.361234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.362676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.363700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.363950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.463 [2024-07-25 12:14:44.365187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.366640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.368095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.368620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.369037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.369052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.372583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.374052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.375289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.376648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.376959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.378445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.379903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.380424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.380866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.381266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.463 [2024-07-25 12:14:44.381283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.384763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.386288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.387174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.388398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.388647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.390123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.391274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.464 [2024-07-25 12:14:44.391635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.391991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.392410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.392437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.395569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.396274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.397500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.398955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.399211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.400551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.400909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.401269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.401625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.402025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.402042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.404185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.405452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.406900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.408367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.408617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.408991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.409352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.409709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.410068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.464 [2024-07-25 12:14:44.410324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.410339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.413526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.415091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.416589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.417948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.418311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.418679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.419040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.419401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.420689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.420975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.420990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.423831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.425315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.426887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.427252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.427647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.428010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.428371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.429363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.430576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.430824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.430839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.464 [2024-07-25 12:14:44.433834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.435298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.435699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.436057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.436424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.436785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.437532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.438759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.440229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.440477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.440492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.443549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.444210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.444568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.444924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.445359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.445899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.447117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.448595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.450059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.450312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.450328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.452682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.464 [2024-07-25 12:14:44.453062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.464 [2024-07-25 12:14:44.453423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:58.464 [... the same *ERROR* line from accel_dpdk_cryptodev.c:468 repeats for every failed allocation attempt between 12:14:44.453423 and 12:14:44.659993; intermediate duplicates collapsed ...]
00:32:58.730 [2024-07-25 12:14:44.659993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:58.730 [2024-07-25 12:14:44.661899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.730 [2024-07-25 12:14:44.661941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.730 [2024-07-25 12:14:44.661993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.730 [2024-07-25 12:14:44.662031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.662991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.664933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.665187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.665202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.666943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.666984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.731 [2024-07-25 12:14:44.667025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.667992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.669484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.669524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.669561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.669598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.669962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.670411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.671941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.671981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.731 [2024-07-25 12:14:44.672738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.672898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.673305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.673321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.674737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.674778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.674815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.675936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.676185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.676201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.678240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.678602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.679870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.681077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.681336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.682822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.731 [2024-07-25 12:14:44.683619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.685080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.686667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.686915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.686931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.689251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.690010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.691225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.692703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.692952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.694255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.695494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.696717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.698184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.698433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.698449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.700973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.702319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.703792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.705270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.705522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.706395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.707624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.731 [2024-07-25 12:14:44.709091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.731 [2024-07-25 12:14:44.710565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.710921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.710944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.714891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.716360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.717946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.719404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.719773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.721006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.722472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.723940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.724849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.725277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.725293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.728521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.729985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.731441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.732133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.732406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.733971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.735442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.736732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.737090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.737501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.732 [2024-07-25 12:14:44.737518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.740789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.742261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.742978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.744353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.744606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.746086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.747622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.747987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.748352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.748732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.748747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.751949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.753063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.754352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.755587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.755839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.757306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.758056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.758417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.758773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.759208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.759224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.762199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.732 [2024-07-25 12:14:44.762964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.764199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.765669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.765920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.767184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.767542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.767897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.768262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.768670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.768686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.770903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.772350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.773891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.775353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.775604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.775979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.776341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.776698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.777053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.777350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.777366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.779955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.781183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.782649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.732 [2024-07-25 12:14:44.784115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.784489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.784863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.785225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.785580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.732 [2024-07-25 12:14:44.786019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.786275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.786290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.789277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.790536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.790895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.791256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.791640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.792004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.793415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.794849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.796432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.796684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.796699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.799784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.800153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.800511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.800870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.801293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.733 [2024-07-25 12:14:44.802471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.803699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.805200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.806666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.807069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.807085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.809177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.809551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.809908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.810268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.810682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.811052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.811426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.811783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.812142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.812541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.812559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.814944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.815308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.815668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.816030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.816416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.816782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.817137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.733 [2024-07-25 12:14:44.817497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.817866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.818219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.818235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.820810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.821180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.821544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.821900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.822335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.822705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.823067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.823432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.823791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.824187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.824203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.826585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.826948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.827310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.827668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.828067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.828443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.828802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.829161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.829520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.733 [2024-07-25 12:14:44.829852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.829868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.832312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.832676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.833040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.833404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.833820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.834189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.834547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.834905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.835272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.835719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.835737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.838208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.838588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.838945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.839304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.839714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.840096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.840473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.840832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.841197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.841633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.733 [2024-07-25 12:14:44.841656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.994 [2024-07-25 12:14:44.844075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.844452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.844819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.845186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.845631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.845996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.846361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.846719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.847082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.847447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.847463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.850094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.850471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.850831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.851190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.851603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.851969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.852343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.852702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.853061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.853445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.853460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.855904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.856278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.994 [2024-07-25 12:14:44.856636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.856999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.857369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.857739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.858095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.858465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.858824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.859223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.994 [2024-07-25 12:14:44.859239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.861739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.862107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.862476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.862836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.863205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.863569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.863930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.864301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.864660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.865073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.865090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.867408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.867770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.868128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:58.995 [2024-07-25 12:14:44.868491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:58.995 [2024-07-25 12:14:44.868909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:58.995 [... the same "*ERROR*: Failed to get src_mbufs!" message from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeats continuously between 12:14:44.868909 and 12:14:45.104076; duplicate entries collapsed ...]
00:32:59.000 [2024-07-25 12:14:45.104076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:59.000 [2024-07-25 12:14:45.105544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.106454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.106908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.107287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.107656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.108156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.109410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.109667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.001 [2024-07-25 12:14:45.109690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.112666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.114126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.114734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.116043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.116477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.116842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.118224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.118586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.119403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.119713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.119729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.259 [2024-07-25 12:14:45.125304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.126051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.126411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.126767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.260 [2024-07-25 12:14:45.127226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.127643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.128940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.130405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.131861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.132110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.132125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.134878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.136097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.136594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.136951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.137207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.137787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.138151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.139730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.141259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.141510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.141525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.145983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.146352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.147535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.148775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.149028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.150511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.260 [2024-07-25 12:14:45.151323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.152761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.154295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.154544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.154559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.156888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.157978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.158343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.159446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.159731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.161236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.162696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.163572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.165083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.165342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.165358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.169666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.170897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.172392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.173859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.174121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.175037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.176272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.177762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.260 [2024-07-25 12:14:45.179205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.179496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.179511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.182221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.182587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.182952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.183329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.183763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.184129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.184491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.184848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.185233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.185591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.185606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.190455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.190825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.191189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.191546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.191958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.192331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.192695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.193803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.194414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.194840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.260 [2024-07-25 12:14:45.194856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.197111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.197480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.197838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.198201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.198611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.198976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.199346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.200785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.201158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.201574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.201595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.260 [2024-07-25 12:14:45.206159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.206525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.206886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.207265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.207646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.209046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.209417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.210213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.211133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.211530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.211546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.213967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.261 [2024-07-25 12:14:45.214338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.214699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.215060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.215338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.216222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.216580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.217795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.218311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.218723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.218739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.221904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.222294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.223895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.224264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.224680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.226252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.226617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.226977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.227366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.227737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.227753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.230422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.230912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.232137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.261 [2024-07-25 12:14:45.232499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.232849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.234159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.234515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.234873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.235242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.235616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.235632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.239381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.239749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.240876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.241461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.241864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.242238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.242601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.242961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.243322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.243727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.243745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.246321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.246690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.248169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.248536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.248942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.261 [2024-07-25 12:14:45.249319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.249688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.250047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.250415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.250851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.250868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.256063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.256437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.256802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.257170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.257564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.257932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.258296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.258653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.259013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.259357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.259376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.262258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.262620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.262981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.263349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.263734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.264099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.261 [2024-07-25 12:14:45.264462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.261 [2024-07-25 12:14:45.264820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.265186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.265529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.265545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.270801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.271181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.271555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.271918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.272379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.272749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.273110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.273557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.274831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.275280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.275297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.277659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.278028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.278397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.278753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.279186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.279552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.279917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.280461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.281616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.262 [2024-07-25 12:14:45.282023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.282040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.286283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.286650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.287007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.287368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.287782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.288151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.288515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.289727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.290239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.290653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.290669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.292906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.294091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.294625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.294994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.295248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.295831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.296194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.296559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.296921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.297176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.297192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.262 [2024-07-25 12:14:45.300912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.302406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.303995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.304991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.305299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.306767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.308233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.309296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.310655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.311039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.311054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.314176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.315413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.316866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.318332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.318696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.320154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.321699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.323158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.324447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.324766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.324780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.329734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.331197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.262 [2024-07-25 12:14:45.332647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.333333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.333584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.334865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.336325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.337852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.338626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.338880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.338895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.341502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.342730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.344196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.345674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.345956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.347156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.262 [2024-07-25 12:14:45.348384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.349850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.351324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.351703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.351718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.357939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.359427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.359473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.360923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.263 [2024-07-25 12:14:45.361181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.362288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.363500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.364960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.366426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.366783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.366798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.368994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.369362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.370704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.370749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.370998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.372554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.374050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.375214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.263 [2024-07-25 12:14:45.376451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.376752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.376782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.522 [2024-07-25 12:14:45.380847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.522 [2024-07-25 12:14:45.380886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:59.787 [2024-07-25 12:14:45.675269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (same error emitted many times between 12:14:45.380886 and 12:14:45.675269)
00:32:59.787 [2024-07-25 12:14:45.675738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.677338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.677700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.678160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.679444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.679881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.679897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.684891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.685267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.685630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.685990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.686341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.687495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.687852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.688772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.689556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.689971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.689987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.787 [2024-07-25 12:14:45.693913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.694287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.694653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.695013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.695278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.696022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.788 [2024-07-25 12:14:45.696384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.697704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.698105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.698532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.698549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.702035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.702407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.702768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.703130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.703395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.704132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.704497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.705766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.706218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.706631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.706647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.710004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.710376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.710738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.711100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.711363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.711988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.712351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.713753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.788 [2024-07-25 12:14:45.714117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.714536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.714556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.717736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.718106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.718486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.718849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.719102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.719697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.720055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.721468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.721838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.722260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.722277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.725400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.725767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.726153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.726516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.726768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.727351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.727710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.729196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.729562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.729973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.788 [2024-07-25 12:14:45.729988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.733039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.733411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.733778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.734148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.734400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.734977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.735341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.736786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.737157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.737572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.737588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.740595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.740962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.741340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.741704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.741956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.742469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.742828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.744323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.744687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.745089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.745105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.748146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.788 [2024-07-25 12:14:45.748517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.749669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.750227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.750479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.751211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.752757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.753115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.753601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.753855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.753870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.756828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.758321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.758680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.759039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.788 [2024-07-25 12:14:45.759413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.759794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.761134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.761496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.762241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.762498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.762514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.766268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.767299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.767658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.789 [2024-07-25 12:14:45.768018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.768374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.769336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.770085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.770449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.771746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.772103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.772120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.776990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.777656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.779080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.780540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.780863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.781992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.782562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.782919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.784438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.784874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.784890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.789553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.791041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.792514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.793589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.793848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.789 [2024-07-25 12:14:45.794659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.795017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.796303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.796717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.797116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.797134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.803133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.804630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.805982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.807086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.807431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.807804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.808863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.809521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.809880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.810130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.810151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.814916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.816409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.817156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.818125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.818535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.819201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.820255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.789 [2024-07-25 12:14:45.820612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.821755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.822041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.822056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.827926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.828501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.828546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.829714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.830145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.830596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.831856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.832253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.833363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.833650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.833665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.839549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.840126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.841431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.841472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.841914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.842428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.843630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.843986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.844999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.789 [2024-07-25 12:14:45.845321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.845337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.849706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.849763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.849801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.849860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.789 [2024-07-25 12:14:45.850597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.850612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.854812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.855057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.855072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.790 [2024-07-25 12:14:45.859649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.859696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.859735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.859772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.860636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.863965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.868454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.868506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.790 [2024-07-25 12:14:45.868544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.868581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.868994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.869464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.873769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.874057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.874072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.790 [2024-07-25 12:14:45.878833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.878962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.879000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.879440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.879461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.883944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.887516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.887562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.887599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.887636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.790 [2024-07-25 12:14:45.888127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.888470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.891882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.790 [2024-07-25 12:14:45.891928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.891980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.892713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.897855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:59.791 [2024-07-25 12:14:45.897892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.898145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.898161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.902436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.902489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.902532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.902573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.902953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:59.791 [2024-07-25 12:14:45.903440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.905811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.905864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.905902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.905942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.906198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.906258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.050 [2024-07-25 12:14:45.906304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.906345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.906382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.906628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.051 [2024-07-25 12:14:45.906645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.910918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.910965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.911606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.912048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.912073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.916755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.917001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.917016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.920695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.051 [2024-07-25 12:14:45.920741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.920785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.920830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.921520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.925995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.926724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.927074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.927089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.051 [2024-07-25 12:14:45.930609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.930957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.931001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.931039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.931291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.931306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.935784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.936033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.936048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.051 [2024-07-25 12:14:45.940879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.940992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.941425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.941442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.944837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.944883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.944921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.944957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.051 [2024-07-25 12:14:45.945737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.945752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.949414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.949460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.949499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.949536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.949955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.950000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.950047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.052 [2024-07-25 12:14:45.950085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.950121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.950378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.950393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.954899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.955360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.955376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.958965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.959006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.959044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.959080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.052 [2024-07-25 12:14:45.959435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.959450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.963909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.963956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.963995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.964602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.965005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.965020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.969794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.970040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.970054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.052 [2024-07-25 12:14:45.973238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.973778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.974020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.974035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.977898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.977945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.977986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.978980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.052 [2024-07-25 12:14:45.982438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.982935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.983184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.983200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.052 [2024-07-25 12:14:45.987311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.987885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.988129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.988150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.992580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.992626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.992987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.053 [2024-07-25 12:14:45.993424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.993988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.994004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.997359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.997408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.997446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.998680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.998932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:45.999388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.004047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.005523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.006971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.007696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.007978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.009553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.053 [2024-07-25 12:14:46.011027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.012301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.012657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.013059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.013075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.017819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.019200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.020620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.020985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.021396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.021761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.022118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.022932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.024162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.024411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.024426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.029513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.031093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.031464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.031909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.032163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.032534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.032942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.034229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.053 [2024-07-25 12:14:46.035684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.035931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.035946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.040812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.041186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.041562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.041921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.042177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.043391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.044853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.046318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.047103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.047410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.047426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.051394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.052613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.053845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.055335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.055584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.056389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.057868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.059218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.060701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.060951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.053 [2024-07-25 12:14:46.060966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.064391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.064759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.065117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.053 [2024-07-25 12:14:46.065476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.065798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.066180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.066541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.066897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.067258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.067665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.067681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.070886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.071268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.071635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.071989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.072442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.072808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.073174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.073537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.073896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.074296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.074312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.077496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.054 [2024-07-25 12:14:46.077869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.078241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.078599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.078997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.079366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.079724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.080082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.080454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.080802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.080818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.083970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.084339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.084701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.085084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.085445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.085814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.086175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.086555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.086914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.087255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.087271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.090553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.090920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.091283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.054 [2024-07-25 12:14:46.091640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.092045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.092434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.092799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.093165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.093521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.093928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.093945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.097083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.097457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.097818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.098180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.098601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.098962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.099325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.099686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.100046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.100467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.100486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.103824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.104200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.104565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.104929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.105371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.054 [2024-07-25 12:14:46.105736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.106092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.106454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.106824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.107188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.107204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.110400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.110765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.111123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.111486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.111821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.112196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.112556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.112911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.113271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.113604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.113620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.116919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.117307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.117666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.118022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.118437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.118807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.054 [2024-07-25 12:14:46.119175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.054 [2024-07-25 12:14:46.119534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.119895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.120322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.120339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.123496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.123871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.124240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.124596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.124978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.125348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.125708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.126070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.126433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.126850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.126866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.129651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.130450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.130811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.131172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.131621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.131983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.132369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.132731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.055 [2024-07-25 12:14:46.133092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.055 [2024-07-25 12:14:46.133516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:00.055 ... (previous message repeated continuously from 12:14:46.133516 through 12:14:46.372355) ...
00:33:00.321 [2024-07-25 12:14:46.372355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:00.321 [2024-07-25 12:14:46.372393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.372431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.372850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.372866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.374932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.374972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.376869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.377112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.377127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.378631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.378672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.378709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.379566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.379968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.380018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.380057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.380094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.380133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.321 [2024-07-25 12:14:46.380555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.380571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.383918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.385388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.385821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.387234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.387483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.388981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.390471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.390848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.391213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.391597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.391613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.394125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.394490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.394848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.395214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.395538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.321 [2024-07-25 12:14:46.395907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.396268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.396624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.396980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.397344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.397361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.322 [2024-07-25 12:14:46.399856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.400228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.400592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.400950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.401349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.401713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.402075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.402446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.402808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.403229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.403248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.405725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.406091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.406453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.406810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.407182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.407552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.407911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.408271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.408627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.409043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.409061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.411508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.411868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.322 [2024-07-25 12:14:46.412234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.412597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.413015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.413387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.413744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.414101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.414470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.414789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.414805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.417588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.417955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.418322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.418678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.419133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.419504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.419881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.420263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.420622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.421003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.421018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.423518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.423877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.424239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.424600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.322 [2024-07-25 12:14:46.424956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.425330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.425688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.426043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.426403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.426737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.426752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.429232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.429610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.429990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.430362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.430766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.322 [2024-07-25 12:14:46.431137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.431549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.431949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.432326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.432743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.432759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.435215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.435581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.435937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.436297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.436657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.437026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.582 [2024-07-25 12:14:46.437391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.437747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.438102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.438512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.438529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.441018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.582 [2024-07-25 12:14:46.441387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.441751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.442117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.442551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.442914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.443279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.443661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.444030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.444418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.444434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.446891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.447258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.447616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.447971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.448377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.448744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.449106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.449467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.583 [2024-07-25 12:14:46.449823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.450217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.450238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.452643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.453008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.453370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.453734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.454069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.454442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.454800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.455160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.455523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.455771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.455786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.458122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.458490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.458850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.459215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.459579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.459942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.460304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.460662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.461020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.461340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.583 [2024-07-25 12:14:46.461356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.464031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.464402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.464761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.465118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.465540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.465904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.466270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.466642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.467004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.467429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.467445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.470090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.470459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.470820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.471424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.471672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.473165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.474705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.476123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.477244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.477525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.477540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.479461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.583 [2024-07-25 12:14:46.479823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.480186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.481750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.482008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.483473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.484931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.485612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.486855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.487105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.487120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.489249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.489612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.490504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.491726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.491974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.493474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.494598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.495994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.497289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.497536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.497551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.499958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.500396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.501662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.583 [2024-07-25 12:14:46.503116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.583 [2024-07-25 12:14:46.503369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.504954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.505905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.507118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.508580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.508829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.508844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.511210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.512491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.513720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.515193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.515442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.516191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.517561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.519019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.520462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.520709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.520724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.523722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.524972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.526444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.527910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.528253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.584 [2024-07-25 12:14:46.529669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.530958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.532417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.533955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.534337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.534355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.537966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.539442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.540913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.542087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.542368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.543608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.545058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.546521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.547172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.547625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.547641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.550865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.552308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.553797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.554606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.554924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.556407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.557870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.584 [2024-07-25 12:14:46.559079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.559443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.559860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.559876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.563171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.564632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.565479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.566985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.567238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.568714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.570172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.570534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.570890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.571259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.571274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.574586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.576001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.577158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.578352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.578602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.580081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.580995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.581377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.581741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.584 [2024-07-25 12:14:46.582149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.582167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.585224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.585906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.587188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.588668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.588916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.590494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.590855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.591216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.591571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.591979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.591995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.594613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.595985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.597265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.598707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.598954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.599653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.600016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.600380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.600739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.584 [2024-07-25 12:14:46.601158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.585 [2024-07-25 12:14:46.601173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.585 [2024-07-25 12:14:46.603220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:00.851 [2024-07-25 12:14:46.789214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:00.851 [2024-07-25 12:14:46.789586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.789601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.792447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.792818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.793193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.793552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.793994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.794367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.794731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.795092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.795456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.795849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.795866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.798307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.798671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.799029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.799395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.799646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.800026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.801254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.801926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.802288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.802708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.802725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.851 [2024-07-25 12:14:46.805155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.805520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.805881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.806244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.806592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.806962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.807327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.807682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.808054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.808469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.808486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.810945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.811318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.811692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.812054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.812476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.812843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.813208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.813566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.813932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.814295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.814312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.818485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.820013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.851 [2024-07-25 12:14:46.821527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.822908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.823215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.824454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.825913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.827386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.828216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.828620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.828635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.831914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.833376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.834823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.835500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.835752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.837186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.838744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.840177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.851 [2024-07-25 12:14:46.840537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.840938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.840956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.844309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.845775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.846666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.848177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.852 [2024-07-25 12:14:46.848429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.849897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.851368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.851732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.852090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.852474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.852492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.855802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.857289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.858384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.859622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.859875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.861366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.862276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.862651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.863014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.863423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.863441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.866512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.867223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.868460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.869874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.870135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.871571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.852 [2024-07-25 12:14:46.871934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.872332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.872697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.873106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.873123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.875568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.877148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.878602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.880179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.880433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.880899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.881264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.881624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.881983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.882358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.882375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.884823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.886068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.887549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.889016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.889449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.889833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.890200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.890562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.852 [2024-07-25 12:14:46.891109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.891367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.891384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.894078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.895540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.897014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.898016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.898397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.898766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.899126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.899493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.900942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.901256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.901272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.904212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.905786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.907221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.907580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.907978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.908354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.908714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.909650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.910878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.911130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.852 [2024-07-25 12:14:46.911153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.852 [2024-07-25 12:14:46.914061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.915535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.916205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.916568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.916980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.917353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.917713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.919173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.920695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.920946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.920965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.923992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.925087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.925451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.925811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.926201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.926569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.927865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.929090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.930558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.930808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.930822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.933757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.853 [2024-07-25 12:14:46.934135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.934502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.934861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.935288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.935852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.937087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.938558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.940033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.940314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.940330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.942597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.942967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.943340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.943700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.944118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.945720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.947232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.948796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.950194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.950506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.950521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.952261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.952623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.952983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:00.853 [2024-07-25 12:14:46.953348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.953636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.954857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.956334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.957803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.958488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.958740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.958756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.960676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.961050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.961416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.962428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:00.853 [2024-07-25 12:14:46.962722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.964299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.965771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.966490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.967845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.968098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.968113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.970210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.970579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.971246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.972484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.972737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.114 [2024-07-25 12:14:46.974235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.975482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.976868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.978152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.978406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.978421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.980833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.981289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.982529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.983994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.984253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.985757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.114 [2024-07-25 12:14:46.986827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.988053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.989522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.989774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.989789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.992226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.993684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.995024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.996493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.996745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.997480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:46.998715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.115 [2024-07-25 12:14:47.000177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.001629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.001880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.001896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.005028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.006271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.006316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.007761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.008012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.008823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.010238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.011758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.013212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.013461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.013476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.016179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.017394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.018869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.018912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.019170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.020375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.021751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.023013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.024490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.115 [2024-07-25 12:14:47.024742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.024757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.026978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.027911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.029939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.030191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.030206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.115 [2024-07-25 12:14:47.032171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.032845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.033091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.033105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.034611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.034657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.034694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.034731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.034978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.035417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.037200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.115 [2024-07-25 12:14:47.037244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.115 [2024-07-25 12:14:47.037282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:01.115 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeated continuously between 12:14:47.037282 and 12:14:47.268617; duplicate entries omitted ...]
00:33:01.378 [2024-07-25 12:14:47.268617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:01.379 [2024-07-25 12:14:47.271281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.271648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.272010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.272375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.272726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.273958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.275439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.276873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.277882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.278133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.278155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.280050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.280433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.280794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.281975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.282271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.283728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.285183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.286041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.287555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.287816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.287832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.290022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.290396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:01.379 [2024-07-25 12:14:47.291591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.292808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.293058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.294540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.295372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.296186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.379 [2024-07-25 12:14:47.296598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:01.977 00:33:01.977 Latency(us) 00:33:01.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.977 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x0 length 0x100 00:33:01.977 crypto_ram : 5.76 44.43 2.78 0.00 0.00 2788078.39 296956.72 2214592.51 00:33:01.977 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x100 length 0x100 00:33:01.977 crypto_ram : 5.84 43.86 2.74 0.00 0.00 2838960.54 255013.68 2375653.79 00:33:01.977 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x0 length 0x100 00:33:01.977 crypto_ram1 : 5.76 44.42 2.78 0.00 0.00 2701256.29 296956.72 2053531.24 00:33:01.977 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x100 length 0x100 00:33:01.977 crypto_ram1 : 5.84 43.85 2.74 0.00 0.00 2748868.20 255013.68 2201170.74 00:33:01.977 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x0 length 0x100 00:33:01.977 crypto_ram2 : 5.54 309.13 19.32 0.00 0.00 374153.84 28730.98 590558.00 00:33:01.977 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x100 length 0x100 00:33:01.977 crypto_ram2 : 5.57 290.94 18.18 0.00 0.00 396630.44 57881.40 600624.33 00:33:01.977 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x0 length 0x100 00:33:01.977 crypto_ram3 : 5.61 318.97 19.94 0.00 0.00 354022.21 57461.96 449629.39 00:33:01.977 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:01.977 Verification LBA range: start 0x100 length 0x100 00:33:01.977 crypto_ram3 : 5.64 298.85 18.68 0.00 0.00 375496.71 10171.19 343932.93 00:33:01.977 =================================================================================================================== 00:33:01.977 Total : 1394.45 87.15 0.00 0.00 687604.86 10171.19 2375653.79 00:33:01.977 00:33:01.977 real 0m8.877s 00:33:01.977 user 0m16.857s 00:33:01.977 sys 0m0.440s 00:33:01.977 12:14:48 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:02.236 12:14:48 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 
00:33:02.236 12:14:48 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:33:02.236 ************************************
00:33:02.236 END TEST bdev_verify_big_io
00:33:02.236 ************************************
00:33:02.236 12:14:48 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:02.236 12:14:48 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:33:02.236 12:14:48 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:02.236 12:14:48 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:33:02.236 ************************************
00:33:02.236 START TEST bdev_write_zeroes
00:33:02.236 ************************************
00:33:02.236 12:14:48 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:02.236 [2024-07-25 12:14:48.242674] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:33:02.236 [2024-07-25 12:14:48.242731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148532 ]
00:33:02.236 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:02.236 EAL: Requested device 0000:3d:01.0 cannot be used
[... the qat_pci_device_allocate()/EAL pair above repeats for every QAT VF from 0000:3d:01.0 through 0000:3f:02.7; duplicate entries condensed ...]
00:33:02.495 [2024-07-25 12:14:48.372539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.495 [2024-07-25 12:14:48.456055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:02.495 [2024-07-25 12:14:48.477305] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:33:02.495 [2024-07-25 12:14:48.485327] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:33:02.495 [2024-07-25 12:14:48.493346] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:33:02.495 [2024-07-25 12:14:48.611134] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
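The notices that follow come from bdevperf loading /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json, which pairs each Malloc base bdev (Malloc0 through Malloc3) with a crypto vbdev keyed by one of the test DEKs (test_dek_qat_cbc, test_dek_qat_xts, test_dek_qat_cbc2, test_dek_qat_xts2). The file itself is generated earlier in the run and is not reproduced in this log; the sketch below only illustrates the general shape of such a config, and the method names, parameters, ordering, and the placeholder key value are assumptions rather than the contents of the real file.

  cat > /tmp/bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "accel",
        "config": [
          { "method": "accel_crypto_key_create",
            "params": { "name": "test_dek_qat_cbc", "cipher": "AES_CBC",
                        "key": "0123456789abcdef0123456789abcdef" } }
        ]
      },
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
          { "method": "bdev_crypto_create",
            "params": { "base_bdev_name": "Malloc0", "name": "crypto_ram",
                        "key_name": "test_dek_qat_cbc" } }
        ]
      }
    ]
  }
  EOF

bdevperf is then pointed at such a file with --json, as in the run_test invocations above.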
00:33:05.026 [2024-07-25 12:14:50.775064] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:33:05.026 [2024-07-25 12:14:50.775127] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:05.026 [2024-07-25 12:14:50.775146] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:05.026 [2024-07-25 12:14:50.783082] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:33:05.026 [2024-07-25 12:14:50.783099] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:05.026 [2024-07-25 12:14:50.783110] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:05.026 [2024-07-25 12:14:50.791104] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:33:05.026 [2024-07-25 12:14:50.791119] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:05.026 [2024-07-25 12:14:50.791130] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:05.026 [2024-07-25 12:14:50.799123] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:33:05.026 [2024-07-25 12:14:50.799144] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:05.026 [2024-07-25 12:14:50.799159] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:05.026 Running I/O for 1 seconds...
00:33:05.960
00:33:05.960 Latency(us)
00:33:05.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:05.960 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:05.960 crypto_ram : 1.02 2174.10 8.49 0.00 0.00 58564.78 5164.24 70883.74
00:33:05.960 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:05.960 crypto_ram1 : 1.02 2179.66 8.51 0.00 0.00 58100.56 5138.02 65431.14
00:33:05.960 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:05.960 crypto_ram2 : 1.02 16779.56 65.55 0.00 0.00 7535.32 2254.44 9909.04
00:33:05.960 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:05.960 crypto_ram3 : 1.02 16758.28 65.46 0.00 0.00 7514.02 2254.44 7864.32
00:33:05.960 ===================================================================================================================
00:33:05.960 Total : 37891.60 148.01 0.00 0.00 13385.06 2254.44 70883.74
00:33:06.218
00:33:06.218 real 0m4.051s
00:33:06.218 user 0m3.683s
00:33:06.218 sys 0m0.314s
12:14:52 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
12:14:52 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:33:06.218 ************************************
00:33:06.218 END TEST bdev_write_zeroes
00:33:06.218 ************************************
00:33:06.218 12:14:52 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:06.218 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:33:06.218 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:06.218 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:33:06.218 ************************************
00:33:06.218 START TEST bdev_json_nonenclosed
00:33:06.218 ************************************
00:33:06.218 12:14:52 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:06.477 [2024-07-25 12:14:52.367330] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:33:06.477 [2024-07-25 12:14:52.367391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149326 ]
00:33:06.477 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:06.477 EAL: Requested device 0000:3d:01.0 cannot be used
[... the qat_pci_device_allocate()/EAL pair above repeats for every QAT VF from 0000:3d:01.0 through 0000:3f:02.7; duplicate entries condensed ...]
00:33:06.477 [2024-07-25 12:14:52.498855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:06.477 [2024-07-25 12:14:52.583074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:06.477 [2024-07-25 12:14:52.583134] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
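bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top-level value is not a JSON object, and the expected outcome is exactly the "not enclosed in {}" error above followed by a non-zero shutdown. The repository's nonenclosed.json is not reproduced in this log; the file below is a hypothetical stand-in that triggers the same check (a top-level array instead of an object), and the companion bdev_json_nonarray case that follows does the equivalent with a "subsystems" value that is not an array.

  # Hypothetical config: valid JSON, but the top-level value is an array, not an object.
  cat > /tmp/nonenclosed.json <<'EOF'
  [
    { "subsystem": "bdev", "config": [] }
  ]
  EOF

  # bdevperf is expected to fail; the test only cares that it exits non-zero.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
      && echo "unexpected success" || echo "failed as expected"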
00:33:06.477 [2024-07-25 12:14:52.583156] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:06.477 [2024-07-25 12:14:52.583168] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:06.736 00:33:06.736 real 0m0.349s 00:33:06.736 user 0m0.202s 00:33:06.736 sys 0m0.145s 00:33:06.736 12:14:52 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:06.736 12:14:52 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:33:06.736 ************************************ 00:33:06.736 END TEST bdev_json_nonenclosed 00:33:06.736 ************************************ 00:33:06.736 12:14:52 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:06.736 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:33:06.736 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:06.736 12:14:52 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:06.736 ************************************ 00:33:06.736 START TEST bdev_json_nonarray 00:33:06.736 ************************************ 00:33:06.736 12:14:52 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:06.736 [2024-07-25 12:14:52.814549] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:33:06.736 [2024-07-25 12:14:52.814605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149355 ]
00:33:06.995 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:06.995 EAL: Requested device 0000:3d:01.0 cannot be used
[... the qat_pci_device_allocate()/EAL pair above repeats for every QAT VF from 0000:3d:01.0 through 0000:3f:02.7; duplicate entries condensed ...]
00:33:06.995 [2024-07-25 12:14:52.945301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:06.995 [2024-07-25 12:14:53.028441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:06.995 [2024-07-25 12:14:53.028510] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:33:06.995 [2024-07-25 12:14:53.028525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:33:06.995 [2024-07-25 12:14:53.028536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:07.253
00:33:07.253 real 0m0.358s
00:33:07.253 user 0m0.200s
00:33:07.253 sys 0m0.156s
12:14:53 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
12:14:53 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:33:07.253 ************************************
00:33:07.253 END TEST bdev_json_nonarray
00:33:07.253 ************************************
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]]
00:33:07.253 12:14:53 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]]
00:33:07.253
00:33:07.253 real 1m10.280s
00:33:07.253 user 2m53.222s
00:33:07.253 sys 0m8.468s
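Every sub-test in this log is driven by run_test from common/autotest_common.sh: it prints the START/END banners, runs the command under time with xtrace enabled, and produces the real/user/sys figures seen above (the whole blockdev_crypto_qat suite took 1m10.280s of real time here). The real helper does considerably more bookkeeping; the snippet below is only a stripped-down sketch of the same pattern, not the actual implementation.

  run_test() {
      # run_test <name> <command...>: banner, time the command, banner again.
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # Used the same way blockdev.sh invokes it above, for example:
  # run_test bdev_write_zeroes ./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''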
12:14:53 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.253 12:14:53 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:07.253 ************************************ 00:33:07.253 END TEST blockdev_crypto_qat 00:33:07.253 ************************************ 00:33:07.253 12:14:53 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:33:07.253 12:14:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:07.253 12:14:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.253 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:33:07.253 ************************************ 00:33:07.253 START TEST chaining 00:33:07.253 ************************************ 00:33:07.253 12:14:53 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:33:07.253 * Looking for test storage... 00:33:07.253 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:07.253 12:14:53 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@7 -- # uname -s 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.253 12:14:53 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.511 12:14:53 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:07.511 12:14:53 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.511 12:14:53 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.511 12:14:53 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.511 12:14:53 chaining -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.512 12:14:53 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.512 12:14:53 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.512 12:14:53 chaining -- paths/export.sh@5 -- # export PATH 00:33:07.512 12:14:53 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@47 -- # : 0 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:33:07.512 12:14:53 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:07.512 12:14:53 chaining -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.512 12:14:53 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:07.512 12:14:53 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:07.512 12:14:53 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:33:07.512 12:14:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@296 -- # e810=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@297 -- # x722=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@298 -- # mlx=() 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:33:17.475 Found 0000:20:00.0 (0x8086 - 0x159b) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:33:17.475 Found 0000:20:00.1 (0x8086 - 0x159b) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:33:17.475 Found net devices under 0000:20:00.0: cvl_0_0 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:33:17.475 Found net devices under 0000:20:00.1: cvl_0_1 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.475 12:15:01 chaining -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:17.475 12:15:01 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.476 12:15:01 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:17.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:33:17.476 00:33:17.476 --- 10.0.0.2 ping statistics --- 00:33:17.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.476 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:17.476 00:33:17.476 --- 10.0.0.1 ping statistics --- 00:33:17.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.476 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@422 -- # return 0 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:17.476 12:15:02 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@481 -- # nvmfpid=153750 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@482 -- # waitforlisten 153750 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@831 -- # '[' -z 153750 ']' 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.476 12:15:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.476 12:15:02 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:17.476 [2024-07-25 12:15:02.271714] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
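nvmfappstart has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and is now blocked in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers. The real helper in autotest_common.sh handles retries and timeouts more carefully; the loop below is only a minimal sketch of the same idea, with the pid and retry count taken from the log lines above.

  # Poll the SPDK RPC socket until the freshly started target (pid 153750 here)
  # responds, or give up after roughly the max_retries=100 shown above.
  rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      if $rpc_py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
          echo "nvmf_tgt is up"
          break
      fi
      kill -0 153750 2>/dev/null || { echo "nvmf_tgt died"; exit 1; }
      sleep 0.5
  done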
00:33:17.476 [2024-07-25 12:15:02.271773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:17.476 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:17.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.476 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:17.476 [2024-07-25 12:15:02.399687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.476 [2024-07-25 12:15:02.486604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.476 [2024-07-25 12:15:02.486649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.476 [2024-07-25 12:15:02.486662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.476 [2024-07-25 12:15:02.486674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.476 [2024-07-25 12:15:02.486684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
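The nvmf/common.sh trace above amounts to a small, self-contained bring-up recipe: the second cvl interface is moved into a private network namespace, both sides get a 10.0.0.x/24 address, TCP port 4420 is opened, connectivity is checked with ping, and nvmf_tgt is then launched inside the namespace. A minimal sketch of that sequence, reconstructed from the trace (the cvl_0_0/cvl_0_1 interface names, the relative nvmf_tgt path, and the -i 0 -e 0xFFFF -m 0x2 options are taken from or simplified for this particular run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # the target runs inside the namespace so it owns the 10.0.0.2 side
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The repeated qat_pci_device_allocate()/EAL lines that follow each application start indicate the host exposes more QAT virtual functions than the process will bind; they appear benign in this run, since the subsequent copies and stat checks still succeed.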
00:33:17.476 [2024-07-25 12:15:02.486710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.476 12:15:03 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.476 12:15:03 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:17.476 12:15:03 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:17.476 12:15:03 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:17.476 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.476 12:15:03 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.476 12:15:03 chaining -- bdev/chaining.sh@69 -- # mktemp 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.ajz7Lkgt9y 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@69 -- # mktemp 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.sGLF27gvni 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.477 malloc0 00:33:17.477 true 00:33:17.477 true 00:33:17.477 [2024-07-25 12:15:03.237876] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:17.477 crypto0 00:33:17.477 [2024-07-25 12:15:03.245902] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:17.477 crypto1 00:33:17.477 [2024-07-25 12:15:03.254013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.477 [2024-07-25 12:15:03.270217] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@85 -- # update_stats 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:17.477 12:15:03 chaining -- 
bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:17.477 12:15:03 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.ajz7Lkgt9y bs=1K count=64 00:33:17.477 64+0 records in 00:33:17.477 64+0 records out 00:33:17.477 65536 bytes (66 kB, 64 KiB) copied, 0.00105624 s, 62.0 MB/s 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.ajz7Lkgt9y --ob Nvme0n1 --bs 65536 --count 1 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@25 -- # local config 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:17.477 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:17.477 "subsystems": [ 00:33:17.477 { 00:33:17.477 "subsystem": "bdev", 00:33:17.477 "config": [ 00:33:17.477 { 00:33:17.477 "method": "bdev_nvme_attach_controller", 00:33:17.477 "params": { 00:33:17.477 "trtype": "tcp", 
00:33:17.477 "adrfam": "IPv4", 00:33:17.477 "name": "Nvme0", 00:33:17.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.477 "traddr": "10.0.0.2", 00:33:17.477 "trsvcid": "4420" 00:33:17.477 } 00:33:17.477 }, 00:33:17.477 { 00:33:17.477 "method": "bdev_set_options", 00:33:17.477 "params": { 00:33:17.477 "bdev_auto_examine": false 00:33:17.477 } 00:33:17.477 } 00:33:17.477 ] 00:33:17.477 } 00:33:17.477 ] 00:33:17.477 }' 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.ajz7Lkgt9y --ob Nvme0n1 --bs 65536 --count 1 00:33:17.477 12:15:03 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:17.477 "subsystems": [ 00:33:17.477 { 00:33:17.477 "subsystem": "bdev", 00:33:17.477 "config": [ 00:33:17.477 { 00:33:17.477 "method": "bdev_nvme_attach_controller", 00:33:17.477 "params": { 00:33:17.477 "trtype": "tcp", 00:33:17.477 "adrfam": "IPv4", 00:33:17.477 "name": "Nvme0", 00:33:17.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.477 "traddr": "10.0.0.2", 00:33:17.477 "trsvcid": "4420" 00:33:17.477 } 00:33:17.477 }, 00:33:17.477 { 00:33:17.477 "method": "bdev_set_options", 00:33:17.477 "params": { 00:33:17.477 "bdev_auto_examine": false 00:33:17.477 } 00:33:17.477 } 00:33:17.477 ] 00:33:17.477 } 00:33:17.477 ] 00:33:17.477 }' 00:33:17.477 [2024-07-25 12:15:03.569771] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:33:17.477 [2024-07-25 12:15:03.569830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153920 ] 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: 
Requested device 0000:3d:02.4 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.735 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:17.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:17.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:17.736 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:17.736 [2024-07-25 12:15:03.701208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.736 [2024-07-25 12:15:03.784508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.251  Copying: 64/64 [kB] (average 62 MBps) 00:33:18.251 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:18.251 12:15:04 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:18.251 
12:15:04 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:18.251 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.251 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.251 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.509 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@96 -- # update_stats 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@51 -- # get_stat 
sequence_executed 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.509 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.510 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.510 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.768 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:18.768 12:15:04 
chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.768 12:15:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.768 12:15:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.sGLF27gvni --ib Nvme0n1 --bs 65536 --count 1 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@25 -- # local config 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:18.768 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:18.768 "subsystems": [ 00:33:18.768 { 00:33:18.768 "subsystem": "bdev", 00:33:18.768 "config": [ 00:33:18.768 { 00:33:18.768 "method": "bdev_nvme_attach_controller", 00:33:18.768 "params": { 00:33:18.768 "trtype": "tcp", 00:33:18.768 "adrfam": "IPv4", 00:33:18.768 "name": "Nvme0", 00:33:18.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.768 "traddr": "10.0.0.2", 00:33:18.768 "trsvcid": "4420" 00:33:18.768 } 00:33:18.768 }, 00:33:18.768 { 00:33:18.768 "method": "bdev_set_options", 00:33:18.768 "params": { 00:33:18.768 "bdev_auto_examine": false 00:33:18.768 } 00:33:18.768 } 00:33:18.768 ] 00:33:18.768 } 00:33:18.768 ] 00:33:18.768 }' 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.sGLF27gvni --ib Nvme0n1 --bs 65536 --count 1 00:33:18.768 12:15:04 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:18.768 "subsystems": [ 00:33:18.768 { 00:33:18.768 "subsystem": "bdev", 00:33:18.768 "config": [ 00:33:18.768 { 00:33:18.768 "method": "bdev_nvme_attach_controller", 00:33:18.768 "params": { 00:33:18.768 "trtype": "tcp", 00:33:18.768 "adrfam": "IPv4", 00:33:18.768 "name": "Nvme0", 00:33:18.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.768 "traddr": "10.0.0.2", 00:33:18.768 "trsvcid": "4420" 00:33:18.768 } 00:33:18.768 }, 00:33:18.768 { 00:33:18.768 "method": "bdev_set_options", 00:33:18.768 "params": { 00:33:18.768 "bdev_auto_examine": false 00:33:18.768 } 00:33:18.768 } 00:33:18.768 ] 00:33:18.768 } 00:33:18.768 ] 00:33:18.768 }' 00:33:18.768 [2024-07-25 12:15:04.844394] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
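Both spdk_dd invocations above are driven the same way: gen_nvme.sh emits a JSON bdev config that attaches to the NVMe-oF target, jq appends a bdev_set_options entry disabling auto-examine, and the resulting document is handed to spdk_dd on a process-substitution fd (the /dev/fd/62 seen in the trace). A rough sketch of that wrapper, with function and variable names assumed rather than copied from chaining.sh:

    build_config() {
        "$rootdir/scripts/gen_nvme.sh" --mode=remote --json-with-subsystems \
            --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 |
        jq '.subsystems[0].config[.subsystems[0].config | length] |=
            {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
    }

    spdk_dd() {
        local config
        config=$(build_config)
        # feed the generated JSON to spdk_dd without a temp file
        "$rootdir/build/bin/spdk_dd" -c <(echo "$config") "$@"
    }

    # write the 64 KiB random file through the crypto chain, then read it back
    spdk_dd --if "$input" --ob Nvme0n1 --bs 65536 --count 1
    spdk_dd --of "$output" --ib Nvme0n1 --bs 65536 --count 1

Turning off bdev_auto_examine keeps other bdev modules from examining (and possibly claiming) the attached namespace, so the dd job only operates on the explicitly configured Nvme0n1.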
00:33:18.768 [2024-07-25 12:15:04.844523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154365 ] 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.026 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:19.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:19.027 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:19.027 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:19.027 [2024-07-25 12:15:05.051509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.027 [2024-07-25 12:15:05.135308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.593  Copying: 64/64 [kB] (average 31 MBps) 00:33:19.593 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:19.593 12:15:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.593 12:15:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:19.593 12:15:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:19.593 12:15:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:19.593 12:15:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.593 12:15:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:19.850 12:15:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.850 12:15:05 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:33:19.850 12:15:05 chaining -- 
bdev/chaining.sh@102 -- # get_stat executed decrypt 00:33:19.850 12:15:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:19.850 12:15:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:19.850 12:15:05 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:19.851 12:15:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.ajz7Lkgt9y /tmp/tmp.sGLF27gvni 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@25 -- # local config 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:19.851 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:19.851 "subsystems": [ 00:33:19.851 { 00:33:19.851 "subsystem": "bdev", 00:33:19.851 "config": [ 00:33:19.851 { 00:33:19.851 "method": "bdev_nvme_attach_controller", 00:33:19.851 "params": { 00:33:19.851 "trtype": "tcp", 00:33:19.851 "adrfam": "IPv4", 00:33:19.851 "name": "Nvme0", 00:33:19.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.851 "traddr": "10.0.0.2", 00:33:19.851 "trsvcid": "4420" 00:33:19.851 } 00:33:19.851 }, 00:33:19.851 { 00:33:19.851 "method": "bdev_set_options", 00:33:19.851 "params": { 00:33:19.851 "bdev_auto_examine": false 00:33:19.851 } 00:33:19.851 } 00:33:19.851 ] 00:33:19.851 } 00:33:19.851 ] 00:33:19.851 }' 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero 
--ob Nvme0n1 --bs 65536 --count 1 00:33:19.851 12:15:05 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:19.851 "subsystems": [ 00:33:19.851 { 00:33:19.851 "subsystem": "bdev", 00:33:19.851 "config": [ 00:33:19.851 { 00:33:19.851 "method": "bdev_nvme_attach_controller", 00:33:19.851 "params": { 00:33:19.851 "trtype": "tcp", 00:33:19.851 "adrfam": "IPv4", 00:33:19.851 "name": "Nvme0", 00:33:19.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.851 "traddr": "10.0.0.2", 00:33:19.851 "trsvcid": "4420" 00:33:19.851 } 00:33:19.851 }, 00:33:19.851 { 00:33:19.851 "method": "bdev_set_options", 00:33:19.851 "params": { 00:33:19.851 "bdev_auto_examine": false 00:33:19.851 } 00:33:19.851 } 00:33:19.851 ] 00:33:19.851 } 00:33:19.851 ] 00:33:19.851 }' 00:33:19.851 [2024-07-25 12:15:05.955017] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:33:19.851 [2024-07-25 12:15:05.955080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154824 ] 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.0 
cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:20.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.109 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:20.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.110 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:20.110 [2024-07-25 12:15:06.089397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.110 [2024-07-25 12:15:06.171360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.624  Copying: 64/64 [kB] (average 20 MBps) 00:33:20.624 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@106 -- # update_stats 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:20.625 12:15:06 chaining -- 
bdev/chaining.sh@37 -- # local event opcode rpc 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:20.625 12:15:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.ajz7Lkgt9y --ob Nvme0n1 --bs 4096 --count 16 00:33:20.625 12:15:06 chaining -- bdev/chaining.sh@25 -- # local config 00:33:20.882 12:15:06 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:20.882 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:20.882 12:15:06 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:20.882 12:15:06 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:20.882 
"subsystems": [ 00:33:20.882 { 00:33:20.882 "subsystem": "bdev", 00:33:20.882 "config": [ 00:33:20.882 { 00:33:20.882 "method": "bdev_nvme_attach_controller", 00:33:20.882 "params": { 00:33:20.882 "trtype": "tcp", 00:33:20.882 "adrfam": "IPv4", 00:33:20.882 "name": "Nvme0", 00:33:20.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.882 "traddr": "10.0.0.2", 00:33:20.882 "trsvcid": "4420" 00:33:20.882 } 00:33:20.882 }, 00:33:20.882 { 00:33:20.882 "method": "bdev_set_options", 00:33:20.882 "params": { 00:33:20.882 "bdev_auto_examine": false 00:33:20.882 } 00:33:20.882 } 00:33:20.882 ] 00:33:20.882 } 00:33:20.882 ] 00:33:20.882 }' 00:33:20.882 12:15:06 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.ajz7Lkgt9y --ob Nvme0n1 --bs 4096 --count 16 00:33:20.882 12:15:06 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:20.882 "subsystems": [ 00:33:20.882 { 00:33:20.882 "subsystem": "bdev", 00:33:20.882 "config": [ 00:33:20.882 { 00:33:20.882 "method": "bdev_nvme_attach_controller", 00:33:20.882 "params": { 00:33:20.882 "trtype": "tcp", 00:33:20.882 "adrfam": "IPv4", 00:33:20.882 "name": "Nvme0", 00:33:20.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.882 "traddr": "10.0.0.2", 00:33:20.882 "trsvcid": "4420" 00:33:20.882 } 00:33:20.883 }, 00:33:20.883 { 00:33:20.883 "method": "bdev_set_options", 00:33:20.883 "params": { 00:33:20.883 "bdev_auto_examine": false 00:33:20.883 } 00:33:20.883 } 00:33:20.883 ] 00:33:20.883 } 00:33:20.883 ] 00:33:20.883 }' 00:33:20.883 [2024-07-25 12:15:06.830257] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:33:20.883 [2024-07-25 12:15:06.830316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154948 ] 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:20.883 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:20.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:20.883 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:20.883 [2024-07-25 12:15:06.961563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.140 [2024-07-25 12:15:07.043823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.655  Copying: 64/64 [kB] (average 15 MBps) 00:33:21.655 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:21.655 
12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.655 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:21.655 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
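The xtrace entries above are chaining.sh's get_stat helper at work: it calls the accel_get_stats RPC, filters the JSON with jq (once for the top-level sequence_executed counter, once per opcode), and the caller then asserts on the delta against a previously saved snapshot. A minimal standalone sketch of that pattern, assuming an SPDK app is listening on the default RPC socket and scripts/rpc.py is available; the function and variable names here are illustrative, not the test's own:

# Pull one counter out of the accel framework statistics, mirroring the
# accel_get_stats + jq calls traced above.
get_stat() {
    local opcode=$1
    if [[ -z $opcode ]]; then
        ./scripts/rpc.py accel_get_stats | jq -r '.sequence_executed'
    else
        ./scripts/rpc.py accel_get_stats | \
            jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
    fi
}

declare -A stats
stats[sequence_executed]=$(get_stat)    # snapshot before the next burst of I/O
# ... push 16 more 4 KiB blocks through the chained bdev ...
(( $(get_stat) == stats[sequence_executed] + 16 ))    # same shape as the chaining.sh@110 check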
00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@114 -- # update_stats 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@40 -- # [[ 
-z copy ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:21.913 12:15:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@117 -- # : 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.sGLF27gvni --ib Nvme0n1 --bs 4096 --count 16 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@25 -- # local config 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:21.913 12:15:07 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:21.913 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:21.913 12:15:08 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:21.913 "subsystems": [ 00:33:21.913 { 00:33:21.913 "subsystem": "bdev", 00:33:21.913 "config": [ 00:33:21.913 { 00:33:21.913 "method": "bdev_nvme_attach_controller", 00:33:21.913 "params": { 00:33:21.913 "trtype": "tcp", 00:33:21.913 "adrfam": "IPv4", 00:33:21.913 "name": "Nvme0", 00:33:21.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.913 "traddr": "10.0.0.2", 00:33:21.913 "trsvcid": "4420" 00:33:21.913 } 00:33:21.913 }, 00:33:21.913 { 00:33:21.913 "method": "bdev_set_options", 00:33:21.913 "params": { 00:33:21.913 "bdev_auto_examine": false 00:33:21.913 } 00:33:21.913 } 00:33:21.913 ] 00:33:21.913 } 00:33:21.913 ] 00:33:21.913 }' 00:33:21.913 12:15:08 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.sGLF27gvni --ib Nvme0n1 --bs 4096 --count 16 00:33:21.913 12:15:08 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:21.913 "subsystems": [ 00:33:21.913 { 00:33:21.913 "subsystem": "bdev", 00:33:21.913 "config": [ 00:33:21.913 { 00:33:21.913 "method": "bdev_nvme_attach_controller", 00:33:21.913 "params": { 00:33:21.913 "trtype": "tcp", 00:33:21.913 "adrfam": "IPv4", 00:33:21.913 "name": "Nvme0", 00:33:21.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.913 "traddr": "10.0.0.2", 00:33:21.913 "trsvcid": "4420" 00:33:21.913 } 00:33:21.913 }, 00:33:21.913 { 00:33:21.913 "method": "bdev_set_options", 00:33:21.913 "params": { 00:33:21.913 "bdev_auto_examine": false 00:33:21.913 } 00:33:21.913 } 00:33:21.913 ] 00:33:21.913 } 00:33:21.913 ] 00:33:21.913 }' 00:33:22.171 [2024-07-25 12:15:08.065795] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
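The JSON blob echoed just above is how the test hands spdk_dd its bdev configuration: gen_nvme.sh emits a bdev subsystem containing the bdev_nvme_attach_controller call for the remote target, jq appends a bdev_set_options entry that disables auto-examine, and the result is fed to spdk_dd on an anonymous file descriptor. A rough standalone equivalent; the output path and the process-substitution form are illustrative (the test itself uses mktemp files and /dev/fd/62):

# Build the bdev config: remote NVMe-oF controller plus auto-examine disabled.
config=$(./scripts/gen_nvme.sh --mode=remote --json-with-subsystems \
             --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 |
         jq '.subsystems[0].config[.subsystems[0].config | length] |=
             {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')

# Read 16 x 4 KiB from the chained bdev back into a file for a later cmp.
./build/bin/spdk_dd -c <(echo "$config") --of /tmp/chaining_out.bin \
    --ib Nvme0n1 --bs 4096 --count 16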
00:33:22.171 [2024-07-25 12:15:08.065856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155165 ] 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (same message pair for each QAT virtual function) 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:22.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:22.171 [2024-07-25 12:15:08.197878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.171 [2024-07-25 12:15:08.280970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.736  Copying: 64/64 [kB] (average 488 kBps) 00:33:22.736 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:22.736 12:15:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:22.736 12:15:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:33:22.993 12:15:08 chaining -- 
bdev/chaining.sh@121 -- # get_stat executed decrypt 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 12:15:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:22.993 12:15:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:22.993 12:15:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.993 12:15:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:22.993 12:15:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.ajz7Lkgt9y /tmp/tmp.sGLF27gvni 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.ajz7Lkgt9y /tmp/tmp.sGLF27gvni 00:33:22.993 12:15:09 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@117 -- # sync 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@120 -- # set +e 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:22.993 12:15:09 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:22.993 rmmod nvme_tcp 00:33:22.993 rmmod nvme_fabrics 00:33:22.993 rmmod nvme_keyring 00:33:23.251 12:15:09 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:23.251 12:15:09 chaining -- nvmf/common.sh@124 -- # set -e 00:33:23.251 12:15:09 chaining -- nvmf/common.sh@125 -- # return 0 00:33:23.251 12:15:09 chaining -- nvmf/common.sh@489 -- # '[' -n 153750 ']' 00:33:23.251 12:15:09 chaining -- nvmf/common.sh@490 -- # killprocess 153750 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@950 -- # '[' -z 153750 ']' 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@954 -- # kill -0 153750 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@955 -- # uname 
00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153750 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153750' 00:33:23.251 killing process with pid 153750 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@969 -- # kill 153750 00:33:23.251 12:15:09 chaining -- common/autotest_common.sh@974 -- # wait 153750 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:23.509 12:15:09 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.509 12:15:09 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:23.509 12:15:09 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.442 12:15:11 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:25.442 12:15:11 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:33:25.442 12:15:11 chaining -- bdev/chaining.sh@132 -- # bperfpid=155749 00:33:25.442 12:15:11 chaining -- bdev/chaining.sh@134 -- # waitforlisten 155749 00:33:25.442 12:15:11 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@831 -- # '[' -z 155749 ']' 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.442 12:15:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:25.442 [2024-07-25 12:15:11.502265] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
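The killprocess sequence traced above for pid 153750 (and repeated later for the bdevperf instances) always has the same shape: probe the pid with kill -0, look up the command name to decide whether a plain SIGTERM is enough, then kill and reap. An approximate sketch of that flow, written out here for readability rather than copied from autotest_common.sh:

killprocess() {
    local pid=$1
    # If the pid is already gone, report it and bail out (the trace for pid
    # 156816 further below shows exactly this branch being taken).
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        kill -9 "$pid"    # sudo wrappers need SIGKILL
    else
        kill "$pid"       # SPDK reactors shut down cleanly on SIGTERM
    fi
    echo "killing process with pid $pid"
    wait "$pid" 2>/dev/null || true    # reap it if it was started by this shell
}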
00:33:25.442 [2024-07-25 12:15:11.502329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155749 ] 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (same message pair for each QAT virtual function) 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:25.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.700 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:25.700 [2024-07-25 12:15:11.636586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.700 [2024-07-25 12:15:11.719051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.634 12:15:12 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.634 12:15:12 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:26.634 12:15:12 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:33:26.634 12:15:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.634 12:15:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:26.634 malloc0 00:33:26.634 true 00:33:26.634 true 00:33:26.634 [2024-07-25 12:15:12.527408] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:26.634 crypto0 00:33:26.634 [2024-07-25 12:15:12.535433] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:26.634 crypto1 00:33:26.634 12:15:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.634 12:15:12 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:26.634 Running I/O for 5 seconds... 
00:33:31.895 00:33:31.895 Latency(us) 00:33:31.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.895 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:31.895 Verification LBA range: start 0x0 length 0x2000 00:33:31.895 crypto1 : 5.01 12017.52 46.94 0.00 0.00 21242.09 557.06 13946.06 00:33:31.895 =================================================================================================================== 00:33:31.895 Total : 12017.52 46.94 0.00 0.00 21242.09 557.06 13946.06 00:33:31.895 0 00:33:31.895 12:15:17 chaining -- bdev/chaining.sh@146 -- # killprocess 155749 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@950 -- # '[' -z 155749 ']' 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@954 -- # kill -0 155749 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@955 -- # uname 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155749 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155749' 00:33:31.895 killing process with pid 155749 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@969 -- # kill 155749 00:33:31.895 Received shutdown signal, test time was about 5.000000 seconds 00:33:31.895 00:33:31.895 Latency(us) 00:33:31.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.895 =================================================================================================================== 00:33:31.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@974 -- # wait 155749 00:33:31.895 12:15:17 chaining -- bdev/chaining.sh@152 -- # bperfpid=156816 00:33:31.895 12:15:17 chaining -- bdev/chaining.sh@154 -- # waitforlisten 156816 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@831 -- # '[' -z 156816 ']' 00:33:31.895 12:15:17 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:31.895 12:15:17 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.896 12:15:17 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:31.896 12:15:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:31.896 [2024-07-25 12:15:17.958935] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
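The first bdevperf pass above fixes the pattern used for the remaining runs: bdevperf is started suspended with --wait-for-rpc and -z, the malloc/crypto bdev chain is created over RPC, perform_tests kicks off the 5-second verify workload, and the process is killed once the latency table has been printed. A condensed sketch of that orchestration; the RPC calls that actually create malloc0, crypto0 and crypto1 are elided because the trace only shows their results:

# Start bdevperf paused; -z keeps it alive waiting for an explicit start.
./build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z &
bperfpid=$!
waitforlisten "$bperfpid"    # helper sourced from autotest_common.sh

# ... rpc.py calls that create malloc0, crypto0 and crypto1 go here ...

# Trigger the run, wait for the latency summary, then tear the process down.
./examples/bdev/bdevperf/bdevperf.py perform_tests
killprocess "$bperfpid"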
00:33:31.896 [2024-07-25 12:15:17.959000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156816 ] 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (same message pair for each QAT virtual function) 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:32.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:32.154 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:32.154 [2024-07-25 12:15:18.090388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.154 [2024-07-25 12:15:18.176606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.088 12:15:18 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.088 12:15:18 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:33.088 12:15:18 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:33:33.088 12:15:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.088 12:15:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:33.088 malloc0 00:33:33.088 true 00:33:33.088 true 00:33:33.088 [2024-07-25 12:15:19.002261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:33:33.088 [2024-07-25 12:15:19.002308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.088 [2024-07-25 12:15:19.002326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10d83b0 00:33:33.088 [2024-07-25 12:15:19.002337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.088 [2024-07-25 12:15:19.003328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.088 [2024-07-25 12:15:19.003353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:33:33.088 pt0 00:33:33.088 [2024-07-25 12:15:19.010292] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:33.088 crypto0 00:33:33.088 [2024-07-25 12:15:19.018311] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:33.088 crypto1 00:33:33.088 12:15:19 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.088 12:15:19 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:33.088 Running I/O for 5 seconds... 
00:33:38.351 00:33:38.351 Latency(us) 00:33:38.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.351 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:38.351 Verification LBA range: start 0x0 length 0x2000 00:33:38.351 crypto1 : 5.01 9710.76 37.93 0.00 0.00 26290.88 616.04 16672.36 00:33:38.351 =================================================================================================================== 00:33:38.351 Total : 9710.76 37.93 0.00 0.00 26290.88 616.04 16672.36 00:33:38.351 0 00:33:38.351 12:15:24 chaining -- bdev/chaining.sh@167 -- # killprocess 156816 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@950 -- # '[' -z 156816 ']' 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@954 -- # kill -0 156816 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@955 -- # uname 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156816 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156816' 00:33:38.351 killing process with pid 156816 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@969 -- # kill 156816 00:33:38.351 Received shutdown signal, test time was about 5.000000 seconds 00:33:38.351 00:33:38.351 Latency(us) 00:33:38.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.351 =================================================================================================================== 00:33:38.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@974 -- # wait 156816 00:33:38.351 12:15:24 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:33:38.351 12:15:24 chaining -- bdev/chaining.sh@170 -- # killprocess 156816 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@950 -- # '[' -z 156816 ']' 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@954 -- # kill -0 156816 00:33:38.351 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (156816) - No such process 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 156816 is not found' 00:33:38.351 Process with pid 156816 is not found 00:33:38.351 12:15:24 chaining -- bdev/chaining.sh@171 -- # wait 156816 00:33:38.351 12:15:24 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:33:38.351 12:15:24 chaining -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:33:38.351 12:15:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@296 -- # e810=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@297 -- # x722=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@298 -- # mlx=() 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.351 12:15:24 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:33:38.352 Found 0000:20:00.0 (0x8086 - 0x159b) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.352 
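What nvmftestinit is doing here is walking the whitelisted Intel/Mellanox NIC device IDs and, for each matching PCI function, globbing sysfs for the bound kernel net devices; the two E810 ports 0000:20:00.0/1 that turn up as cvl_0_0 and cvl_0_1 appear a few entries below. A stripped-down sketch of that discovery loop, with the device list hard-coded instead of taken from the script's pci_bus_cache arrays:

# For each candidate NIC PCI function, report the net devices sysfs knows about.
net_devs=()
for pci in 0000:20:00.0 0000:20:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # skip functions with no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")      # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done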
12:15:24 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:33:38.352 Found 0000:20:00.1 (0x8086 - 0x159b) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:33:38.352 Found net devices under 0000:20:00.0: cvl_0_0 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:33:38.352 Found net devices under 0000:20:00.1: cvl_0_1 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.352 12:15:24 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:38.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:33:38.610 00:33:38.610 --- 10.0.0.2 ping statistics --- 00:33:38.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.610 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:38.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:38.610 00:33:38.610 --- 10.0.0.1 ping statistics --- 00:33:38.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.610 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@422 -- # return 0 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.610 12:15:24 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@481 -- # nvmfpid=158023 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@482 -- # waitforlisten 158023 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@831 -- # '[' -z 158023 ']' 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.610 12:15:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:38.610 12:15:24 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:38.867 [2024-07-25 12:15:24.783544] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:33:38.867 [2024-07-25 12:15:24.783604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:38.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.867 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:38.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:38.868 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:38.868 [2024-07-25 12:15:24.911694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.125 [2024-07-25 12:15:24.996763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.125 [2024-07-25 12:15:24.996807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.125 [2024-07-25 12:15:24.996820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.125 [2024-07-25 12:15:24.996832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.125 [2024-07-25 12:15:24.996842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
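At this point the target has finished DPDK/EAL initialization and is about to serve RPCs on /var/tmp/spdk.sock; the "Waiting for process to start up and listen..." message above comes from a helper that blocks until that socket answers. A minimal sketch of such a wait loop, assuming a simplified stand-in for the real waitforlisten helper (the retry count and the rpc_get_methods probe are illustrative choices, not the exact helper body):

    # Poll an SPDK app's RPC UNIX socket until it accepts a trivial RPC.
    wait_for_rpc_socket() {
        local sock=$1 retries=${2:-100}
        local rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
        for ((i = 0; i < retries; i++)); do
            # The socket node only exists once the app is listening.
            if [[ -S $sock ]] && "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    # e.g. block until the nvmf target launched above is reachable:
    wait_for_rpc_socket /var/tmp/spdk.sock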
00:33:39.125 [2024-07-25 12:15:24.996876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:40.057 12:15:25 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:40.057 12:15:25 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.057 12:15:25 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.057 12:15:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:40.057 malloc0 00:33:40.057 [2024-07-25 12:15:26.018923] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.057 [2024-07-25 12:15:26.035112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.057 12:15:26 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:33:40.057 12:15:26 chaining -- bdev/chaining.sh@189 -- # bperfpid=158184 00:33:40.057 12:15:26 chaining -- bdev/chaining.sh@191 -- # waitforlisten 158184 /var/tmp/bperf.sock 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@831 -- # '[' -z 158184 ']' 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.057 12:15:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:40.057 12:15:26 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:33:40.057 [2024-07-25 12:15:26.103459] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:33:40.057 [2024-07-25 12:15:26.103517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158184 ] 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:40.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.057 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:40.315 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:40.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:40.315 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:40.315 [2024-07-25 12:15:26.234495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.315 [2024-07-25 12:15:26.320761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.881 12:15:26 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:40.881 12:15:26 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:41.137 12:15:26 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:33:41.137 12:15:27 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:33:41.700 [2024-07-25 12:15:27.648154] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:41.700 nvme0n1 00:33:41.700 true 00:33:41.700 crypto0 00:33:41.700 12:15:27 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.700 Running I/O for 5 seconds... 
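The 5-second run just started follows the bdevperf-under-RPC pattern visible in the xtrace above: bdevperf was launched with -z and --wait-for-rpc so it idles, the crypto bdev configuration is replayed over /var/tmp/bperf.sock (producing the 'Found key "key0"' / crypto0 lines), and only then does bdevperf.py trigger the I/O phase. A rough sketch of that control flow, with a hypothetical file standing in for the RPC commands that chaining.sh actually feeds from the script itself:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # bdevperf was started earlier as:
    #   $SPDK/build/examples/bdevperf -r $BPERF_SOCK -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z
    # Replay the saved bdev/crypto configuration into the idle app; with no
    # method name, rpc.py reads one RPC command per line from stdin.
    $SPDK/scripts/rpc.py -s "$BPERF_SOCK" < /tmp/bperf_rpcs.txt   # hypothetical command file
    # Kick off the timed verify workload and wait for the summary printed below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests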
00:33:46.994 00:33:46.994 Latency(us) 00:33:46.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.994 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:46.994 Verification LBA range: start 0x0 length 0x2000 00:33:46.994 crypto0 : 5.02 9549.05 37.30 0.00 0.00 26725.02 3250.59 22020.10 00:33:46.994 =================================================================================================================== 00:33:46.994 Total : 9549.05 37.30 0.00 0.00 26725.02 3250.59 22020.10 00:33:46.994 0 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:46.994 12:15:32 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@205 -- # sequence=95852 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:46.994 12:15:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@206 -- # encrypt=47926 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:47.252 12:15:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@207 -- # decrypt=47926 
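The sequence/encrypt/decrypt counters above all come from the same accel_get_stats RPC against the bdevperf socket, filtered with jq exactly as the xtrace shows. Condensed into a single standalone helper (a sketch only; the real logic is the get_stat/get_stat_bperf pair in bdev/chaining.sh):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # No argument: total accel sequences completed.
    # With an opcode ("encrypt", "decrypt", "crc32c"): completions for that opcode.
    get_accel_stat() {
        local opcode=$1
        if [[ -z $opcode ]]; then
            "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats | jq -r .sequence_executed
        else
            "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
                jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
        fi
    }

    get_accel_stat            # -> 95852 in this run
    get_accel_stat encrypt    # -> 47926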
00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:33:47.508 12:15:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.765 12:15:33 chaining -- bdev/chaining.sh@208 -- # crc32c=95852 00:33:47.765 12:15:33 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:33:47.765 12:15:33 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:33:47.765 12:15:33 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:33:47.765 12:15:33 chaining -- bdev/chaining.sh@214 -- # killprocess 158184 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@950 -- # '[' -z 158184 ']' 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@954 -- # kill -0 158184 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@955 -- # uname 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158184 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:47.765 12:15:33 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:47.766 12:15:33 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158184' 00:33:47.766 killing process with pid 158184 00:33:47.766 12:15:33 chaining -- common/autotest_common.sh@969 -- # kill 158184 00:33:47.766 Received shutdown signal, test time was about 5.000000 seconds 00:33:47.766 00:33:47.766 Latency(us) 00:33:47.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.766 =================================================================================================================== 00:33:47.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.766 12:15:33 chaining -- common/autotest_common.sh@974 -- # wait 158184 00:33:48.023 12:15:34 chaining -- bdev/chaining.sh@219 -- # bperfpid=159511 00:33:48.023 12:15:34 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:33:48.023 12:15:34 chaining -- bdev/chaining.sh@221 -- # waitforlisten 159511 /var/tmp/bperf.sock 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@831 -- # '[' -z 159511 ']' 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:48.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:48.023 12:15:34 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:48.023 [2024-07-25 12:15:34.091497] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:33:48.023 [2024-07-25 12:15:34.091560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159511 ] 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.280 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:48.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.281 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:48.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.281 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:48.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.281 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:48.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.281 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:48.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.281 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:48.281 [2024-07-25 12:15:34.225947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.281 [2024-07-25 12:15:34.305310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.213 12:15:34 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:49.213 12:15:34 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:49.213 12:15:34 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:33:49.213 12:15:34 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:33:49.471 [2024-07-25 12:15:35.374210] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:49.471 nvme0n1 00:33:49.471 true 00:33:49.471 crypto0 00:33:49.471 12:15:35 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.471 Running I/O for 5 seconds... 
00:33:54.735 00:33:54.735 Latency(us) 00:33:54.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.735 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:33:54.735 Verification LBA range: start 0x0 length 0x200 00:33:54.735 crypto0 : 5.01 1883.21 117.70 0.00 0.00 16646.21 1376.26 20132.66 00:33:54.735 =================================================================================================================== 00:33:54.735 Total : 1883.21 117.70 0.00 0.00 16646.21 1376.26 20132.66 00:33:54.735 0 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@233 -- # sequence=18862 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:54.735 12:15:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@234 -- # encrypt=9431 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:54.993 12:15:41 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@235 -- # decrypt=9431 
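The counters just collected for this 64 KiB run satisfy the same chaining checks the script applies next:

    (( 9431 + 9431 == 18862 ))   # encrypt + decrypt == sequence_executed
    (( 9431 + 9431 == 18862 ))   # encrypt + decrypt == crc32c executed

That is, every completed accel sequence carried exactly one encrypt or decrypt plus a matching crc32c; this reading is an inference from the numbers above, not a statement printed by the log.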
00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:33:55.251 12:15:41 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:55.510 12:15:41 chaining -- bdev/chaining.sh@236 -- # crc32c=18862 00:33:55.510 12:15:41 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:33:55.510 12:15:41 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:33:55.510 12:15:41 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:33:55.510 12:15:41 chaining -- bdev/chaining.sh@242 -- # killprocess 159511 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@950 -- # '[' -z 159511 ']' 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@954 -- # kill -0 159511 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@955 -- # uname 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159511 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159511' 00:33:55.510 killing process with pid 159511 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@969 -- # kill 159511 00:33:55.510 Received shutdown signal, test time was about 5.000000 seconds 00:33:55.510 00:33:55.510 Latency(us) 00:33:55.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.510 =================================================================================================================== 00:33:55.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.510 12:15:41 chaining -- common/autotest_common.sh@974 -- # wait 159511 00:33:55.769 12:15:41 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@117 -- # sync 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@120 -- # set +e 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:55.769 rmmod nvme_tcp 00:33:55.769 rmmod nvme_fabrics 00:33:55.769 rmmod nvme_keyring 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@124 -- # set -e 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@125 -- # return 0 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@489 -- # '[' -n 
158023 ']' 00:33:55.769 12:15:41 chaining -- nvmf/common.sh@490 -- # killprocess 158023 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@950 -- # '[' -z 158023 ']' 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@954 -- # kill -0 158023 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@955 -- # uname 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158023 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158023' 00:33:55.769 killing process with pid 158023 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@969 -- # kill 158023 00:33:55.769 12:15:41 chaining -- common/autotest_common.sh@974 -- # wait 158023 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:56.028 12:15:42 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.028 12:15:42 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:56.028 12:15:42 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.566 12:15:44 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:58.566 12:15:44 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:33:58.566 00:33:58.566 real 0m50.889s 00:33:58.566 user 1m1.843s 00:33:58.566 sys 0m13.179s 00:33:58.566 12:15:44 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:58.566 12:15:44 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:58.566 ************************************ 00:33:58.566 END TEST chaining 00:33:58.566 ************************************ 00:33:58.566 12:15:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:58.566 12:15:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:58.566 12:15:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:58.566 12:15:44 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:58.566 12:15:44 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:58.566 12:15:44 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:58.566 12:15:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.566 12:15:44 -- common/autotest_common.sh@10 -- # set +x 00:33:58.566 12:15:44 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:33:58.566 12:15:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:58.566 12:15:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:58.567 12:15:44 -- common/autotest_common.sh@10 -- # set +x 00:34:05.113 INFO: APP EXITING 00:34:05.113 INFO: killing all VMs 00:34:05.113 INFO: killing vhost app 00:34:05.113 INFO: EXIT DONE 00:34:08.397 Waiting for block devices as requested 00:34:08.672 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:08.672 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:08.672 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:08.948 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:08.948 
0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:08.948 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:09.206 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:09.206 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:09.206 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:09.464 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:09.464 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:09.464 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:09.723 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:09.723 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:09.723 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:09.980 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:09.980 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:15.241 Cleaning 00:34:15.241 Removing: /var/run/dpdk/spdk0/config 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:15.241 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:15.241 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:15.241 Removing: /dev/shm/nvmf_trace.0 00:34:15.241 Removing: /dev/shm/spdk_tgt_trace.pid4041891 00:34:15.241 Removing: /var/run/dpdk/spdk0 00:34:15.241 Removing: /var/run/dpdk/spdk_pid100703 00:34:15.241 Removing: /var/run/dpdk/spdk_pid102983 00:34:15.241 Removing: /var/run/dpdk/spdk_pid105157 00:34:15.241 Removing: /var/run/dpdk/spdk_pid106961 00:34:15.241 Removing: /var/run/dpdk/spdk_pid109196 00:34:15.241 Removing: /var/run/dpdk/spdk_pid111373 00:34:15.241 Removing: /var/run/dpdk/spdk_pid113513 00:34:15.241 Removing: /var/run/dpdk/spdk_pid115329 00:34:15.241 Removing: /var/run/dpdk/spdk_pid115993 00:34:15.241 Removing: /var/run/dpdk/spdk_pid116535 00:34:15.241 Removing: /var/run/dpdk/spdk_pid119155 00:34:15.241 Removing: /var/run/dpdk/spdk_pid121970 00:34:15.241 Removing: /var/run/dpdk/spdk_pid124459 00:34:15.241 Removing: /var/run/dpdk/spdk_pid125790 00:34:15.241 Removing: /var/run/dpdk/spdk_pid127129 00:34:15.241 Removing: /var/run/dpdk/spdk_pid127923 00:34:15.241 Removing: /var/run/dpdk/spdk_pid127954 00:34:15.241 Removing: /var/run/dpdk/spdk_pid128094 00:34:15.241 Removing: /var/run/dpdk/spdk_pid128470 00:34:15.241 Removing: /var/run/dpdk/spdk_pid128583 00:34:15.241 Removing: /var/run/dpdk/spdk_pid129850 00:34:15.241 Removing: /var/run/dpdk/spdk_pid131965 00:34:15.241 Removing: /var/run/dpdk/spdk_pid133822 00:34:15.241 Removing: /var/run/dpdk/spdk_pid134864 00:34:15.241 Removing: /var/run/dpdk/spdk_pid1354 00:34:15.241 Removing: /var/run/dpdk/spdk_pid135916 00:34:15.241 Removing: /var/run/dpdk/spdk_pid136214 00:34:15.241 Removing: /var/run/dpdk/spdk_pid136241 00:34:15.241 Removing: /var/run/dpdk/spdk_pid136267 00:34:15.241 Removing: /var/run/dpdk/spdk_pid137402 00:34:15.241 Removing: /var/run/dpdk/spdk_pid138190 00:34:15.241 Removing: /var/run/dpdk/spdk_pid138733 00:34:15.241 Removing: /var/run/dpdk/spdk_pid141095 00:34:15.241 Removing: /var/run/dpdk/spdk_pid143457 00:34:15.241 Removing: /var/run/dpdk/spdk_pid145859 00:34:15.241 Removing: /var/run/dpdk/spdk_pid147196 00:34:15.241 Removing: /var/run/dpdk/spdk_pid148532 
00:34:15.241 Removing: /var/run/dpdk/spdk_pid149326 00:34:15.241 Removing: /var/run/dpdk/spdk_pid149355 00:34:15.241 Removing: /var/run/dpdk/spdk_pid153920 00:34:15.241 Removing: /var/run/dpdk/spdk_pid154365 00:34:15.241 Removing: /var/run/dpdk/spdk_pid154824 00:34:15.241 Removing: /var/run/dpdk/spdk_pid154948 00:34:15.241 Removing: /var/run/dpdk/spdk_pid155165 00:34:15.241 Removing: /var/run/dpdk/spdk_pid155749 00:34:15.241 Removing: /var/run/dpdk/spdk_pid156816 00:34:15.241 Removing: /var/run/dpdk/spdk_pid158184 00:34:15.241 Removing: /var/run/dpdk/spdk_pid159511 00:34:15.241 Removing: /var/run/dpdk/spdk_pid18671 00:34:15.241 Removing: /var/run/dpdk/spdk_pid23520 00:34:15.241 Removing: /var/run/dpdk/spdk_pid24918 00:34:15.241 Removing: /var/run/dpdk/spdk_pid26148 00:34:15.241 Removing: /var/run/dpdk/spdk_pid29747 00:34:15.241 Removing: /var/run/dpdk/spdk_pid35776 00:34:15.241 Removing: /var/run/dpdk/spdk_pid38917 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4036877 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4040543 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4041891 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4042554 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4043627 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4043897 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4045360 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4045562 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4045935 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4049513 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4051495 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4051812 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4052141 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4052496 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4052972 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4053203 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4053402 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4053688 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4054656 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4057922 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4058190 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4058518 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4058783 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4058863 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4058993 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4059265 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4059531 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4059809 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4060081 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4060363 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4060637 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4060920 00:34:15.241 Removing: /var/run/dpdk/spdk_pid4061200 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4061480 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4061765 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4062051 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4062328 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4062618 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4062899 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4063187 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4063465 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4063756 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4064036 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4064324 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4064602 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4064932 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4065426 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4065716 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4066191 00:34:15.242 Removing: 
/var/run/dpdk/spdk_pid4066549 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4066898 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4067375 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4067668 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4067982 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4068328 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4068748 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4069265 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4069474 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4074223 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4076481 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4078813 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4080316 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4081653 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4082011 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4082217 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4082242 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4087097 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4087674 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4088984 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4089270 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4095850 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4097908 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4099077 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4104264 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4106233 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4107363 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4112526 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4115662 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4116810 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4128173 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4130839 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4132006 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4143623 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4146039 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4147865 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4159203 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4163066 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4164332 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4177194 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4180156 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4181988 00:34:15.242 Removing: /var/run/dpdk/spdk_pid44403 00:34:15.242 Removing: /var/run/dpdk/spdk_pid4491 00:34:15.242 Removing: /var/run/dpdk/spdk_pid48335 00:34:15.242 Removing: /var/run/dpdk/spdk_pid54735 00:34:15.242 Removing: /var/run/dpdk/spdk_pid5826 00:34:15.242 Removing: /var/run/dpdk/spdk_pid58411 00:34:15.242 Removing: /var/run/dpdk/spdk_pid65904 00:34:15.242 Removing: /var/run/dpdk/spdk_pid68701 00:34:15.242 Removing: /var/run/dpdk/spdk_pid75922 00:34:15.242 Removing: /var/run/dpdk/spdk_pid78654 00:34:15.242 Removing: /var/run/dpdk/spdk_pid86017 00:34:15.242 Removing: /var/run/dpdk/spdk_pid89192 00:34:15.242 Removing: /var/run/dpdk/spdk_pid94368 00:34:15.242 Removing: /var/run/dpdk/spdk_pid94690 00:34:15.500 Removing: /var/run/dpdk/spdk_pid95173 00:34:15.500 Removing: /var/run/dpdk/spdk_pid95705 00:34:15.500 Removing: /var/run/dpdk/spdk_pid96314 00:34:15.500 Removing: /var/run/dpdk/spdk_pid97185 00:34:15.500 Removing: /var/run/dpdk/spdk_pid98123 00:34:15.500 Removing: /var/run/dpdk/spdk_pid98489 00:34:15.500 Clean 00:34:15.500 12:16:01 -- common/autotest_common.sh@1451 -- # return 0 00:34:15.500 12:16:01 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:34:15.500 12:16:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.500 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:34:15.500 12:16:01 -- 
spdk/autotest.sh@390 -- # timing_exit autotest 00:34:15.500 12:16:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.500 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:34:15.500 12:16:01 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:34:15.500 12:16:01 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:34:15.500 12:16:01 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:34:15.500 12:16:01 -- spdk/autotest.sh@395 -- # hash lcov 00:34:15.500 12:16:01 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:15.500 12:16:01 -- spdk/autotest.sh@397 -- # hostname 00:34:15.500 12:16:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:34:15.758 geninfo: WARNING: invalid characters removed from testname! 00:34:33.834 12:16:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:37.117 12:16:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:39.646 12:16:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:41.578 12:16:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:43.483 12:16:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:46.009 12:16:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:34:48.545 12:16:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:48.545 12:16:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:48.545 12:16:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:48.545 12:16:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.545 12:16:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.545 12:16:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.545 12:16:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.545 12:16:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.545 12:16:34 -- paths/export.sh@5 -- $ export PATH 00:34:48.545 12:16:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.545 12:16:34 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:48.545 12:16:34 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:48.545 12:16:34 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721902594.XXXXXX 00:34:48.545 12:16:34 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721902594.hxRy9M 00:34:48.546 12:16:34 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:48.546 12:16:34 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:48.546 12:16:34 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:34:48.546 12:16:34 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:48.546 12:16:34 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:48.546 12:16:34 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:48.546 12:16:34 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:34:48.546 12:16:34 -- common/autotest_common.sh@10 -- $ set +x 00:34:48.546 12:16:34 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:34:48.546 12:16:34 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:48.546 12:16:34 -- pm/common@17 -- $ local monitor 00:34:48.546 12:16:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:48.546 12:16:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:48.546 12:16:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:48.546 12:16:34 -- pm/common@21 -- $ date +%s 00:34:48.546 12:16:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:48.546 12:16:34 -- pm/common@21 -- $ date +%s 00:34:48.546 12:16:34 -- pm/common@25 -- $ sleep 1 00:34:48.546 12:16:34 -- pm/common@21 -- $ date +%s 00:34:48.546 12:16:34 -- pm/common@21 -- $ date +%s 00:34:48.546 12:16:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902594 00:34:48.546 12:16:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902594 00:34:48.546 12:16:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902594 00:34:48.546 12:16:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902594 00:34:48.546 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902594_collect-cpu-load.pm.log 00:34:48.546 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902594_collect-vmstat.pm.log 00:34:48.546 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902594_collect-cpu-temp.pm.log 00:34:48.546 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902594_collect-bmc-pm.bmc.pm.log 00:34:49.481 12:16:35 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:49.481 12:16:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:49.481 12:16:35 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:49.481 12:16:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:49.481 12:16:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:49.481 12:16:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:49.481 12:16:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:49.481 12:16:35 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl 
']' 00:34:49.481 12:16:35 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:34:49.481 12:16:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:49.481 12:16:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:49.481 12:16:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:49.481 12:16:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:49.481 12:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:49.481 12:16:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:49.481 12:16:35 -- pm/common@44 -- $ pid=172593 00:34:49.481 12:16:35 -- pm/common@50 -- $ kill -TERM 172593 00:34:49.481 12:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:49.481 12:16:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:49.481 12:16:35 -- pm/common@44 -- $ pid=172595 00:34:49.481 12:16:35 -- pm/common@50 -- $ kill -TERM 172595 00:34:49.481 12:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:49.481 12:16:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:49.481 12:16:35 -- pm/common@44 -- $ pid=172597 00:34:49.481 12:16:35 -- pm/common@50 -- $ kill -TERM 172597 00:34:49.481 12:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:49.481 12:16:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:49.481 12:16:35 -- pm/common@44 -- $ pid=172620 00:34:49.481 12:16:35 -- pm/common@50 -- $ sudo -E kill -TERM 172620 00:34:49.481 + [[ -n 3908259 ]] 00:34:49.481 + sudo kill 3908259 00:34:49.490 [Pipeline] } 00:34:49.510 [Pipeline] // stage 00:34:49.515 [Pipeline] } 00:34:49.532 [Pipeline] // timeout 00:34:49.537 [Pipeline] } 00:34:49.553 [Pipeline] // catchError 00:34:49.558 [Pipeline] } 00:34:49.575 [Pipeline] // wrap 00:34:49.581 [Pipeline] } 00:34:49.596 [Pipeline] // catchError 00:34:49.606 [Pipeline] stage 00:34:49.608 [Pipeline] { (Epilogue) 00:34:49.623 [Pipeline] catchError 00:34:49.624 [Pipeline] { 00:34:49.638 [Pipeline] echo 00:34:49.640 Cleanup processes 00:34:49.645 [Pipeline] sh 00:34:49.925 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:49.925 172697 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:34:49.925 173042 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:49.939 [Pipeline] sh 00:34:50.218 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:50.218 ++ grep -v 'sudo pgrep' 00:34:50.218 ++ awk '{print $1}' 00:34:50.218 + sudo kill -9 172697 00:34:50.229 [Pipeline] sh 00:34:50.508 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:50.508 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:34:58.617 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:35:03.919 [Pipeline] sh 00:35:04.199 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:04.199 Artifacts sizes are good 00:35:04.212 [Pipeline] archiveArtifacts 00:35:04.218 Archiving artifacts 00:35:04.353 [Pipeline] sh 00:35:04.632 + sudo chown -R sys_sgci 
/var/jenkins/workspace/crypto-phy-autotest
00:35:04.647 [Pipeline] cleanWs
00:35:04.657 [WS-CLEANUP] Deleting project workspace...
00:35:04.657 [WS-CLEANUP] Deferred wipeout is used...
00:35:04.665 [WS-CLEANUP] done
00:35:04.667 [Pipeline] }
00:35:04.686 [Pipeline] // catchError
00:35:04.698 [Pipeline] sh
00:35:04.979 + logger -p user.info -t JENKINS-CI
00:35:04.988 [Pipeline] }
00:35:05.005 [Pipeline] // stage
00:35:05.011 [Pipeline] }
00:35:05.027 [Pipeline] // node
00:35:05.032 [Pipeline] End of Pipeline
00:35:05.051 Finished: SUCCESS